Test Report: Docker_Linux_crio 19450

                    
8d898ab9c8ea504736c6a6ac30beb8b93591f909:2024-08-15:35798

Failed tests (4/328)

Order  Failed test                                               Duration (s)
34     TestAddons/parallel/Ingress                               152.29
36     TestAddons/parallel/MetricsServer                         331.87
126    TestFunctional/parallel/ImageCommands/ImageLoadFromFile   5.08
171    TestMultiControlPlane/serial/DeleteSecondaryNode          14.48
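
To re-run a single failure locally, the usual pattern is a go test run filter on the failing subtest path. A minimal sketch, assuming a minikube source checkout with the integration suite under ./test/integration (inferred from the addons_test.go/helpers_test.go file names in the logs below; the package path and timeout are assumptions, only the -run pattern comes from this report):

    # Hypothetical local re-run of the first failing test; adjust the package
    # path and timeout to the actual harness in use.
    go test ./test/integration -run 'TestAddons/parallel/Ingress' -v -timeout 30m
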
TestAddons/parallel/Ingress (152.29s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-703024 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-703024 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-703024 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dd363164-79e6-4e5e-a89e-c4ebcda828ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dd363164-79e6-4e5e-a89e-c4ebcda828ae] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003879527s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-703024 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.608583707s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-703024 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-703024 addons disable ingress-dns --alsologtostderr -v=1: (1.170051566s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-703024 addons disable ingress --alsologtostderr -v=1: (7.725783404s)
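
On the failing step itself (addons_test.go:264): minikube ssh reports the remote command's exit status, and curl's exit code 28 means the operation timed out, so the probe most likely hung for the full 2m10s without the ingress ever answering. A hedged manual check against a still-running cluster; the --max-time bound and -w format are illustrative additions, not part of the test:

    # Bound the wait and print only the HTTP status code; exit code 28 from
    # curl here again would confirm the ingress accepts but never responds.
    out/minikube-linux-amd64 -p addons-703024 ssh \
      "curl -s --max-time 15 -o /dev/null -w '%{http_code}' http://127.0.0.1/ -H 'Host: nginx.example.com'"
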
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-703024
helpers_test.go:235: (dbg) docker inspect addons-703024:

-- stdout --
	[
	    {
	        "Id": "2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f",
	        "Created": "2024-08-15T17:05:31.00634759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 386147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T17:05:31.108071249Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:49d4702e5c94195d7796cb79f5fbc9d7cc584c1c41f3c58bf1694d1da009b2f6",
	        "ResolvConfPath": "/var/lib/docker/containers/2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f/hosts",
	        "LogPath": "/var/lib/docker/containers/2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f/2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f-json.log",
	        "Name": "/addons-703024",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-703024:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-703024",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af4570059a0f0808481c40ea677a6be381ccd02833f96d974e8555f4e9622388-init/diff:/var/lib/docker/overlay2/debad26787101f2e0bd77abae2a4f62ccd76a5180cc196365483720250fb2357/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af4570059a0f0808481c40ea677a6be381ccd02833f96d974e8555f4e9622388/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af4570059a0f0808481c40ea677a6be381ccd02833f96d974e8555f4e9622388/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af4570059a0f0808481c40ea677a6be381ccd02833f96d974e8555f4e9622388/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-703024",
	                "Source": "/var/lib/docker/volumes/addons-703024/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-703024",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-703024",
	                "name.minikube.sigs.k8s.io": "addons-703024",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b0c24a6e73bd999708ab5ad9f98c76d95319fb0fb88fa8553446a35e7e83eb0",
	            "SandboxKey": "/var/run/docker/netns/6b0c24a6e73b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-703024": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1c3f8d471e58852380a4ac912f81ccc3ecb004bd521310a2ab761467bf472c1",
	                    "EndpointID": "2194ee0aa57dc96f114bf91e71e00a8ed99b086bb2586042eeff49fb75dbb5d0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-703024",
	                        "2d94eb4aadd4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
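
The inspect output above shows the node's service ports (22, 2376, 5000, 8443, 32443) each published on a dynamic 127.0.0.1 host port. The same Go-template lookup that minikube itself runs later in this log (the "container inspect -f" calls in the Last Start section) resolves one of them, for example the SSH port:

    # Resolve the host port mapped to the node's 22/tcp (33138 in this run);
    # the template is copied from the inspect calls in the minikube logs below.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-703024
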
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-703024 -n addons-703024
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-703024 logs -n 25: (1.068164707s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-962475 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | download-docker-962475                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-962475                                                                   | download-docker-962475 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-527485   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | binary-mirror-527485                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39117                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-527485                                                                     | binary-mirror-527485   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| addons  | enable dashboard -p                                                                         | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | addons-703024                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | addons-703024                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-703024 --wait=true                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:07 UTC | 15 Aug 24 17:07 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:07 UTC | 15 Aug 24 17:07 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-703024 ssh cat                                                                       | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:07 UTC | 15 Aug 24 17:07 UTC |
	|         | /opt/local-path-provisioner/pvc-50d57a12-86e5-43f7-b121-a6d8b09e9508_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:07 UTC | 15 Aug 24 17:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-703024 ip                                                                            | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | -p addons-703024                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | addons-703024                                                                               |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | -p addons-703024                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-703024 addons                                                                        | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | addons-703024                                                                               |                        |         |         |                     |                     |
	| addons  | addons-703024 addons                                                                        | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-703024 ssh curl -s                                                                   | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-703024 ip                                                                            | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:05:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:05:08.718581  385407 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:08.718720  385407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:08.718730  385407 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:08.718734  385407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:08.719067  385407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:05:08.719775  385407 out.go:352] Setting JSON to false
	I0815 17:05:08.720701  385407 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6461,"bootTime":1723735048,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:05:08.720761  385407 start.go:139] virtualization: kvm guest
	I0815 17:05:08.722699  385407 out.go:177] * [addons-703024] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:05:08.723959  385407 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:05:08.724034  385407 notify.go:220] Checking for updates...
	I0815 17:05:08.726421  385407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:08.727704  385407 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:05:08.728859  385407 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	I0815 17:05:08.730032  385407 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:05:08.731094  385407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:05:08.732211  385407 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:08.752582  385407 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:05:08.752702  385407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:08.800823  385407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-15 17:05:08.791208744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:05:08.800928  385407 docker.go:307] overlay module found
	I0815 17:05:08.802609  385407 out.go:177] * Using the docker driver based on user configuration
	I0815 17:05:08.803736  385407 start.go:297] selected driver: docker
	I0815 17:05:08.803758  385407 start.go:901] validating driver "docker" against <nil>
	I0815 17:05:08.803775  385407 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:05:08.804536  385407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:08.847960  385407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-15 17:05:08.838992575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:05:08.848128  385407 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:05:08.848335  385407 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:05:08.849891  385407 out.go:177] * Using Docker driver with root privileges
	I0815 17:05:08.851239  385407 cni.go:84] Creating CNI manager for ""
	I0815 17:05:08.851256  385407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 17:05:08.851268  385407 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:08.851345  385407 start.go:340] cluster config:
	{Name:addons-703024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-703024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:08.852767  385407 out.go:177] * Starting "addons-703024" primary control-plane node in "addons-703024" cluster
	I0815 17:05:08.853897  385407 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 17:05:08.854953  385407 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:05:08.856009  385407 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:05:08.856035  385407 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:05:08.856042  385407 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:05:08.856140  385407 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:08.856220  385407 preload.go:172] Found /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:05:08.856231  385407 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:05:08.856598  385407 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/config.json ...
	I0815 17:05:08.856628  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/config.json: {Name:mk1d0408945a591f5c5e1721189ffc9aa5843ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:08.872658  385407 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:05:08.872826  385407 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:05:08.872844  385407 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:05:08.872849  385407 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:05:08.872859  385407 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:05:08.872866  385407 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 17:05:21.435650  385407 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 17:05:21.435707  385407 cache.go:194] Successfully downloaded all kic artifacts
	I0815 17:05:21.435782  385407 start.go:360] acquireMachinesLock for addons-703024: {Name:mk4736efa8f9335340b5139086cb62f2d9137682 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:21.435878  385407 start.go:364] duration metric: took 76.734µs to acquireMachinesLock for "addons-703024"
	I0815 17:05:21.435905  385407 start.go:93] Provisioning new machine with config: &{Name:addons-703024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-703024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:05:21.436006  385407 start.go:125] createHost starting for "" (driver="docker")
	I0815 17:05:21.529990  385407 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0815 17:05:21.530285  385407 start.go:159] libmachine.API.Create for "addons-703024" (driver="docker")
	I0815 17:05:21.530328  385407 client.go:168] LocalClient.Create starting
	I0815 17:05:21.530463  385407 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem
	I0815 17:05:21.572307  385407 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem
	I0815 17:05:21.646767  385407 cli_runner.go:164] Run: docker network inspect addons-703024 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0815 17:05:21.662561  385407 cli_runner.go:211] docker network inspect addons-703024 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0815 17:05:21.662637  385407 network_create.go:284] running [docker network inspect addons-703024] to gather additional debugging logs...
	I0815 17:05:21.662655  385407 cli_runner.go:164] Run: docker network inspect addons-703024
	W0815 17:05:21.677513  385407 cli_runner.go:211] docker network inspect addons-703024 returned with exit code 1
	I0815 17:05:21.677547  385407 network_create.go:287] error running [docker network inspect addons-703024]: docker network inspect addons-703024: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-703024 not found
	I0815 17:05:21.677574  385407 network_create.go:289] output of [docker network inspect addons-703024]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-703024 not found
	
	** /stderr **
	I0815 17:05:21.677667  385407 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:05:21.693179  385407 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cfa2e0}
	I0815 17:05:21.693238  385407 network_create.go:124] attempt to create docker network addons-703024 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0815 17:05:21.693311  385407 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-703024 addons-703024
	I0815 17:05:22.042022  385407 network_create.go:108] docker network addons-703024 192.168.49.0/24 created
	I0815 17:05:22.042054  385407 kic.go:121] calculated static IP "192.168.49.2" for the "addons-703024" container
	I0815 17:05:22.042126  385407 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0815 17:05:22.056896  385407 cli_runner.go:164] Run: docker volume create addons-703024 --label name.minikube.sigs.k8s.io=addons-703024 --label created_by.minikube.sigs.k8s.io=true
	I0815 17:05:22.158167  385407 oci.go:103] Successfully created a docker volume addons-703024
	I0815 17:05:22.158296  385407 cli_runner.go:164] Run: docker run --rm --name addons-703024-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703024 --entrypoint /usr/bin/test -v addons-703024:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0815 17:05:26.629800  385407 cli_runner.go:217] Completed: docker run --rm --name addons-703024-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703024 --entrypoint /usr/bin/test -v addons-703024:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib: (4.471459636s)
	I0815 17:05:26.629847  385407 oci.go:107] Successfully prepared a docker volume addons-703024
	I0815 17:05:26.629874  385407 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:05:26.629896  385407 kic.go:194] Starting extracting preloaded images to volume ...
	I0815 17:05:26.629956  385407 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-703024:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
	I0815 17:05:30.949085  385407 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-703024:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (4.319078798s)
	I0815 17:05:30.949117  385407 kic.go:203] duration metric: took 4.319216387s to extract preloaded images to volume ...
	W0815 17:05:30.949237  385407 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0815 17:05:30.949365  385407 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0815 17:05:30.992011  385407 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-703024 --name addons-703024 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703024 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-703024 --network addons-703024 --ip 192.168.49.2 --volume addons-703024:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0815 17:05:31.278014  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Running}}
	I0815 17:05:31.295586  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:31.313998  385407 cli_runner.go:164] Run: docker exec addons-703024 stat /var/lib/dpkg/alternatives/iptables
	I0815 17:05:31.354108  385407 oci.go:144] the created container "addons-703024" has a running status.
	I0815 17:05:31.354144  385407 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa...
	I0815 17:05:31.438637  385407 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0815 17:05:31.459103  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:31.475486  385407 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0815 17:05:31.475513  385407 kic_runner.go:114] Args: [docker exec --privileged addons-703024 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0815 17:05:31.523622  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:31.539534  385407 machine.go:93] provisionDockerMachine start ...
	I0815 17:05:31.539628  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:31.557781  385407 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:31.558067  385407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:31.558092  385407 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:05:31.558766  385407 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60584->127.0.0.1:33138: read: connection reset by peer
	I0815 17:05:34.688086  385407 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-703024
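The inspect template above recovers the host port Docker mapped to the container's 22/tcp (33138 here); the first dial fails with a connection reset simply because sshd inside the container is not up yet, and the client retries. The same lookup plus a manual login from a shell (a sketch):

	PORT=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-703024)
	ssh -i /home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa \
	  -p "$PORT" docker@127.0.0.1 hostname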
	
	I0815 17:05:34.688119  385407 ubuntu.go:169] provisioning hostname "addons-703024"
	I0815 17:05:34.688176  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:34.704503  385407 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:34.704732  385407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:34.704753  385407 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-703024 && echo "addons-703024" | sudo tee /etc/hostname
	I0815 17:05:34.847139  385407 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-703024
	
	I0815 17:05:34.847216  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:34.863824  385407 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:34.864014  385407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:34.864032  385407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-703024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-703024/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-703024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:05:34.992496  385407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:05:34.992528  385407 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19450-377193/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-377193/.minikube}
	I0815 17:05:34.992597  385407 ubuntu.go:177] setting up certificates
	I0815 17:05:34.992613  385407 provision.go:84] configureAuth start
	I0815 17:05:34.992679  385407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703024
	I0815 17:05:35.008878  385407 provision.go:143] copyHostCerts
	I0815 17:05:35.008962  385407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem (1078 bytes)
	I0815 17:05:35.009145  385407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem (1123 bytes)
	I0815 17:05:35.009246  385407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem (1675 bytes)
	I0815 17:05:35.009330  385407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem org=jenkins.addons-703024 san=[127.0.0.1 192.168.49.2 addons-703024 localhost minikube]
	I0815 17:05:35.080384  385407 provision.go:177] copyRemoteCerts
	I0815 17:05:35.080450  385407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:05:35.080498  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.097041  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:35.192838  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:05:35.214352  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 17:05:35.235713  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:05:35.256396  385407 provision.go:87] duration metric: took 263.758764ms to configureAuth
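configureAuth above issues a CA-signed server certificate whose SANs cover every name the endpoint answers on (127.0.0.1, 192.168.49.2, addons-703024, localhost, minikube). minikube does this in Go; an equivalent openssl sketch, with the org taken from the log and file names illustrative:

	openssl req -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.addons-703024" -out server.csr
	# sign with the minikube CA and attach the SANs from the log line above
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-703024,DNS:localhost,DNS:minikube')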
	I0815 17:05:35.256434  385407 ubuntu.go:193] setting minikube options for container-runtime
	I0815 17:05:35.256648  385407 config.go:182] Loaded profile config "addons-703024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:05:35.256785  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.273282  385407 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:35.273466  385407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:35.273488  385407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:05:35.489141  385407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
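The tee above parks --insecure-registry 10.96.0.0/12 (the service CIDR) in /etc/sysconfig/crio.minikube; the kicbase image's crio unit presumably consumes it via an EnvironmentFile= directive, along these lines (an assumption about the unit file, not dumped from the image):

	# fragment of crio.service as assumed
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS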
	
	I0815 17:05:35.489170  385407 machine.go:96] duration metric: took 3.949611229s to provisionDockerMachine
	I0815 17:05:35.489185  385407 client.go:171] duration metric: took 13.958847531s to LocalClient.Create
	I0815 17:05:35.489207  385407 start.go:167] duration metric: took 13.958924192s to libmachine.API.Create "addons-703024"
	I0815 17:05:35.489223  385407 start.go:293] postStartSetup for "addons-703024" (driver="docker")
	I0815 17:05:35.489239  385407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:05:35.489312  385407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:05:35.489364  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.505632  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:35.600949  385407 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:05:35.603743  385407 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 17:05:35.603771  385407 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 17:05:35.603779  385407 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 17:05:35.603787  385407 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 17:05:35.603798  385407 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/addons for local assets ...
	I0815 17:05:35.603852  385407 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/files for local assets ...
	I0815 17:05:35.603879  385407 start.go:296] duration metric: took 114.648796ms for postStartSetup
	I0815 17:05:35.604138  385407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703024
	I0815 17:05:35.620301  385407 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/config.json ...
	I0815 17:05:35.620569  385407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:05:35.620631  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.637047  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:35.725167  385407 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 17:05:35.729116  385407 start.go:128] duration metric: took 14.293097266s to createHost
	I0815 17:05:35.729138  385407 start.go:83] releasing machines lock for "addons-703024", held for 14.293248247s
	I0815 17:05:35.729201  385407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703024
	I0815 17:05:35.745135  385407 ssh_runner.go:195] Run: cat /version.json
	I0815 17:05:35.745178  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.745217  385407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:05:35.745290  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.762779  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:35.762953  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:35.924964  385407 ssh_runner.go:195] Run: systemctl --version
	I0815 17:05:35.928969  385407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:05:36.064297  385407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 17:05:36.068431  385407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:05:36.085549  385407 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 17:05:36.085624  385407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:05:36.110561  385407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
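The two find runs above neutralize competing CNI configs by renaming rather than deleting them, so the change is reversible; spelled out plainly (a sketch equivalent to the logged command):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;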
	I0815 17:05:36.110595  385407 start.go:495] detecting cgroup driver to use...
	I0815 17:05:36.110632  385407 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 17:05:36.110703  385407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:05:36.124282  385407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:05:36.133697  385407 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:05:36.133756  385407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:05:36.145434  385407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:05:36.157661  385407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:05:36.232863  385407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:05:36.312986  385407 docker.go:233] disabling docker service ...
	I0815 17:05:36.313042  385407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:05:36.329581  385407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:05:36.339542  385407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:05:36.412453  385407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:05:36.489001  385407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:05:36.499108  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:05:36.513099  385407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:05:36.513154  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.521711  385407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:05:36.521776  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.530130  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.538275  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.546441  385407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:05:36.554023  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.562209  385407 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.575622  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.583882  385407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:05:36.591010  385407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:05:36.598095  385407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:05:36.671655  385407 ssh_runner.go:195] Run: sudo systemctl restart crio
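Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (a reconstruction from the commands, not a dump of the file; section placement follows CRI-O's standard layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]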
	I0815 17:05:36.776793  385407 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:05:36.776873  385407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:05:36.780024  385407 start.go:563] Will wait 60s for crictl version
	I0815 17:05:36.780069  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:05:36.782824  385407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:05:36.815202  385407 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 17:05:36.815285  385407 ssh_runner.go:195] Run: crio --version
	I0815 17:05:36.851080  385407 ssh_runner.go:195] Run: crio --version
	I0815 17:05:36.887218  385407 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 17:05:36.888375  385407 cli_runner.go:164] Run: docker network inspect addons-703024 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:05:36.904036  385407 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 17:05:36.907383  385407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:05:36.917066  385407 kubeadm.go:883] updating cluster {Name:addons-703024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-703024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:05:36.917205  385407 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:05:36.917250  385407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:05:36.977292  385407 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:05:36.977315  385407 crio.go:433] Images already preloaded, skipping extraction
	I0815 17:05:36.977358  385407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:05:37.008155  385407 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:05:37.008178  385407 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:05:37.008186  385407 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0815 17:05:37.008296  385407 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-703024 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-703024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
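The doubled ExecStart in the kubelet fragment above is the standard systemd drop-in idiom: an empty ExecStart= clears the base unit's command so the next line can replace it wholesale, e.g.:

	# generic drop-in override, e.g. /etc/systemd/system/<svc>.service.d/override.conf
	[Service]
	ExecStart=
	ExecStart=/new/command --with-flags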
	I0815 17:05:37.008363  385407 ssh_runner.go:195] Run: crio config
	I0815 17:05:37.047478  385407 cni.go:84] Creating CNI manager for ""
	I0815 17:05:37.047496  385407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 17:05:37.047506  385407 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:05:37.047528  385407 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-703024 NodeName:addons-703024 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:05:37.047666  385407 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-703024"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
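A generated config like the one above can be sanity-checked before init is run; kubeadm v1.31 ships a validate subcommand, and init itself accepts --dry-run:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or exercise the full init path without mutating the host:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run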
	
	I0815 17:05:37.047725  385407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:05:37.055886  385407 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:05:37.055942  385407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 17:05:37.063534  385407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 17:05:37.078589  385407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:05:37.093782  385407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0815 17:05:37.109397  385407 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0815 17:05:37.112248  385407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:05:37.121333  385407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:05:37.196832  385407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:05:37.208591  385407 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024 for IP: 192.168.49.2
	I0815 17:05:37.208627  385407 certs.go:194] generating shared ca certs ...
	I0815 17:05:37.208649  385407 certs.go:226] acquiring lock for ca certs: {Name:mkf196aaefcb61003123eeb327e0f1a70bf4bfe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.208783  385407 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key
	I0815 17:05:37.263047  385407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt ...
	I0815 17:05:37.263078  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt: {Name:mk399af234c069e3ed75cc5132478ed5f424a637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.263232  385407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key ...
	I0815 17:05:37.263242  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key: {Name:mk7670345ad8e9e93de5e51cbe26f447c50a667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.263312  385407 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key
	I0815 17:05:37.349644  385407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt ...
	I0815 17:05:37.349675  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt: {Name:mkb84e4ed90993f652fd97864a136f02e4db5580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.349849  385407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key ...
	I0815 17:05:37.349861  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key: {Name:mkd3a1fc36993b42851f4c114648a631c92b494d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.349932  385407 certs.go:256] generating profile certs ...
	I0815 17:05:37.349991  385407 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.key
	I0815 17:05:37.350006  385407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt with IP's: []
	I0815 17:05:37.836177  385407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt ...
	I0815 17:05:37.836212  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: {Name:mkd168136aba0e51c304406ace01a3841be06252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.836376  385407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.key ...
	I0815 17:05:37.836387  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.key: {Name:mk8263c3e99d11398fd40554bb2162bc05a08af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.836456  385407 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key.49a1a781
	I0815 17:05:37.836474  385407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt.49a1a781 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0815 17:05:37.969541  385407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt.49a1a781 ...
	I0815 17:05:37.969571  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt.49a1a781: {Name:mk86cb345bf5335803b3d8217df84c7d593c372a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.969734  385407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key.49a1a781 ...
	I0815 17:05:37.969748  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key.49a1a781: {Name:mk278b909fa90b694010d5b20a202adb7f1f7246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.969824  385407 certs.go:381] copying /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt.49a1a781 -> /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt
	I0815 17:05:37.969894  385407 certs.go:385] copying /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key.49a1a781 -> /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key
	I0815 17:05:37.969940  385407 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.key
	I0815 17:05:37.969957  385407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.crt with IP's: []
	I0815 17:05:38.137436  385407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.crt ...
	I0815 17:05:38.137468  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.crt: {Name:mk86d754f7b46fdf2d05689b8fe52bba57601036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:38.137626  385407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.key ...
	I0815 17:05:38.137639  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.key: {Name:mkda8d1a5d469f7adedc152e763b78617c8ff925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:38.137806  385407 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 17:05:38.137844  385407 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem (1078 bytes)
	I0815 17:05:38.137869  385407 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:05:38.137893  385407 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem (1675 bytes)
	I0815 17:05:38.138562  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:05:38.160252  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:05:38.180486  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:05:38.200523  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 17:05:38.220382  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 17:05:38.240194  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 17:05:38.260446  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:05:38.280684  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 17:05:38.300643  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:05:38.320898  385407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:05:38.335760  385407 ssh_runner.go:195] Run: openssl version
	I0815 17:05:38.340511  385407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:05:38.348445  385407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:05:38.351373  385407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:05:38.351425  385407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:05:38.357400  385407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
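The openssl/ln pair above installs the minikube CA into the system trust store under its subject-hash name (b5213941.0), which is how OpenSSL looks up CAs in /etc/ssl/certs; by hand:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"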
	I0815 17:05:38.365013  385407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:05:38.367679  385407 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:05:38.367756  385407 kubeadm.go:392] StartCluster: {Name:addons-703024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-703024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:38.367840  385407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 17:05:38.367904  385407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:05:38.399527  385407 cri.go:89] found id: ""
	I0815 17:05:38.399605  385407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 17:05:38.407319  385407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 17:05:38.414900  385407 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0815 17:05:38.414962  385407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 17:05:38.422359  385407 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 17:05:38.422381  385407 kubeadm.go:157] found existing configuration files:
	
	I0815 17:05:38.422416  385407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 17:05:38.429626  385407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 17:05:38.429685  385407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 17:05:38.436592  385407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 17:05:38.443481  385407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 17:05:38.443535  385407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 17:05:38.450422  385407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 17:05:38.457607  385407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 17:05:38.457653  385407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 17:05:38.464584  385407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 17:05:38.471713  385407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 17:05:38.471754  385407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 17:05:38.478594  385407 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0815 17:05:38.512244  385407 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 17:05:38.512315  385407 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 17:05:38.530381  385407 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0815 17:05:38.530455  385407 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0815 17:05:38.530540  385407 kubeadm.go:310] OS: Linux
	I0815 17:05:38.530629  385407 kubeadm.go:310] CGROUPS_CPU: enabled
	I0815 17:05:38.530704  385407 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0815 17:05:38.530771  385407 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0815 17:05:38.530823  385407 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0815 17:05:38.530900  385407 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0815 17:05:38.530982  385407 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0815 17:05:38.531058  385407 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0815 17:05:38.531127  385407 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0815 17:05:38.531195  385407 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0815 17:05:38.580174  385407 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 17:05:38.580346  385407 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 17:05:38.580495  385407 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 17:05:38.586419  385407 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 17:05:38.589924  385407 out.go:235]   - Generating certificates and keys ...
	I0815 17:05:38.590015  385407 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 17:05:38.590071  385407 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 17:05:38.823342  385407 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 17:05:39.014648  385407 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 17:05:39.129731  385407 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 17:05:39.446496  385407 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 17:05:39.755320  385407 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 17:05:39.755471  385407 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-703024 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 17:05:39.966187  385407 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 17:05:39.966343  385407 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-703024 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 17:05:40.040875  385407 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 17:05:40.160458  385407 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 17:05:40.250890  385407 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 17:05:40.250966  385407 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 17:05:40.434931  385407 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 17:05:40.588956  385407 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 17:05:40.650170  385407 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 17:05:40.807576  385407 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 17:05:41.057971  385407 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 17:05:41.058417  385407 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 17:05:41.060795  385407 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 17:05:41.062961  385407 out.go:235]   - Booting up control plane ...
	I0815 17:05:41.063077  385407 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 17:05:41.063181  385407 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 17:05:41.063261  385407 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 17:05:41.072088  385407 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 17:05:41.077177  385407 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 17:05:41.077234  385407 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 17:05:41.149166  385407 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 17:05:41.149314  385407 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 17:05:42.150756  385407 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001583708s
	I0815 17:05:42.150857  385407 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 17:05:46.152997  385407 kubeadm.go:310] [api-check] The API server is healthy after 4.002260813s
	I0815 17:05:46.163644  385407 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 17:05:46.173987  385407 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 17:05:46.190920  385407 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 17:05:46.191162  385407 kubeadm.go:310] [mark-control-plane] Marking the node addons-703024 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 17:05:46.197933  385407 kubeadm.go:310] [bootstrap-token] Using token: krclci.kozi6o9ch4qso3c4
	I0815 17:05:46.199520  385407 out.go:235]   - Configuring RBAC rules ...
	I0815 17:05:46.199678  385407 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 17:05:46.202196  385407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 17:05:46.208004  385407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 17:05:46.210336  385407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 17:05:46.212437  385407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 17:05:46.216225  385407 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 17:05:46.559261  385407 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 17:05:46.980383  385407 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 17:05:47.558236  385407 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 17:05:47.559403  385407 kubeadm.go:310] 
	I0815 17:05:47.559488  385407 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 17:05:47.559498  385407 kubeadm.go:310] 
	I0815 17:05:47.559611  385407 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 17:05:47.559621  385407 kubeadm.go:310] 
	I0815 17:05:47.559668  385407 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 17:05:47.559753  385407 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 17:05:47.559820  385407 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 17:05:47.559829  385407 kubeadm.go:310] 
	I0815 17:05:47.559903  385407 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 17:05:47.559913  385407 kubeadm.go:310] 
	I0815 17:05:47.559971  385407 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 17:05:47.559981  385407 kubeadm.go:310] 
	I0815 17:05:47.560054  385407 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 17:05:47.560154  385407 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 17:05:47.560252  385407 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 17:05:47.560277  385407 kubeadm.go:310] 
	I0815 17:05:47.560398  385407 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 17:05:47.560522  385407 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 17:05:47.560532  385407 kubeadm.go:310] 
	I0815 17:05:47.560658  385407 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token krclci.kozi6o9ch4qso3c4 \
	I0815 17:05:47.560800  385407 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a342846b00061d7c3551c06e4f758c5edc3939c9da852e4d92590498b260c16a \
	I0815 17:05:47.560828  385407 kubeadm.go:310] 	--control-plane 
	I0815 17:05:47.560841  385407 kubeadm.go:310] 
	I0815 17:05:47.560962  385407 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 17:05:47.560974  385407 kubeadm.go:310] 
	I0815 17:05:47.561085  385407 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token krclci.kozi6o9ch4qso3c4 \
	I0815 17:05:47.561230  385407 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a342846b00061d7c3551c06e4f758c5edc3939c9da852e4d92590498b260c16a 
	I0815 17:05:47.563083  385407 kubeadm.go:310] W0815 17:05:38.509946    1298 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:05:47.563342  385407 kubeadm.go:310] W0815 17:05:38.510502    1298 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:05:47.563559  385407 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0815 17:05:47.563690  385407 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
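The --discovery-token-ca-cert-hash printed in the join commands above can be re-derived at any time from the cluster CA using the standard openssl pipeline from the kubeadm docs (minikube keeps its CA under /var/lib/minikube/certs rather than /etc/kubernetes/pki):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'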
	I0815 17:05:47.563724  385407 cni.go:84] Creating CNI manager for ""
	I0815 17:05:47.563737  385407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 17:05:47.566230  385407 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 17:05:47.567449  385407 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 17:05:47.570965  385407 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 17:05:47.570980  385407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 17:05:47.586801  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 17:05:47.770652  385407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 17:05:47.770726  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:47.770766  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-703024 minikube.k8s.io/updated_at=2024_08_15T17_05_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=addons-703024 minikube.k8s.io/primary=true
	I0815 17:05:47.778317  385407 ops.go:34] apiserver oom_adj: -16
	I0815 17:05:47.862867  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:48.362934  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:48.863465  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:49.363857  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:49.863590  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:50.362898  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:50.863169  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:51.363185  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:51.424435  385407 kubeadm.go:1113] duration metric: took 3.653768566s to wait for elevateKubeSystemPrivileges
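The burst of `kubectl get sa default` calls above is minikube polling, at roughly 500ms intervals, until the default ServiceAccount exists; that is its readiness signal for creating the minikube-rbac cluster-admin binding. A shell sketch of the same loop:

        # Poll until the default ServiceAccount is available
        until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
          sleep 0.5   # matches the ~500ms cadence visible in the timestamps above
        done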
	I0815 17:05:51.424468  385407 kubeadm.go:394] duration metric: took 13.05672834s to StartCluster
	I0815 17:05:51.424485  385407 settings.go:142] acquiring lock: {Name:mke1aec41bab7354aae03597d79755a9c481f6a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:51.424619  385407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:05:51.424973  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/kubeconfig: {Name:mk661ec10a39902a1883ea9ee46c4be1d73fd858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:51.425140  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 17:05:51.425235  385407 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:05:51.425319  385407 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
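The toEnable map is the merged view of default addons plus whatever the test requested; every key set to true fans out below into its own "Setting addon" / host check / docker inspect sequence. The same state can be toggled from the CLI, for example:

        minikube -p addons-703024 addons enable metrics-server
        minikube -p addons-703024 addons list   # per-addon enabled/disabled table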
	I0815 17:05:51.425424  385407 addons.go:69] Setting yakd=true in profile "addons-703024"
	I0815 17:05:51.425441  385407 addons.go:69] Setting inspektor-gadget=true in profile "addons-703024"
	I0815 17:05:51.425465  385407 addons.go:234] Setting addon yakd=true in "addons-703024"
	I0815 17:05:51.425466  385407 addons.go:69] Setting ingress=true in profile "addons-703024"
	I0815 17:05:51.425483  385407 config.go:182] Loaded profile config "addons-703024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:05:51.425488  385407 addons.go:69] Setting ingress-dns=true in profile "addons-703024"
	I0815 17:05:51.425495  385407 addons.go:69] Setting helm-tiller=true in profile "addons-703024"
	I0815 17:05:51.425507  385407 addons.go:234] Setting addon ingress-dns=true in "addons-703024"
	I0815 17:05:51.425478  385407 addons.go:234] Setting addon inspektor-gadget=true in "addons-703024"
	I0815 17:05:51.425519  385407 addons.go:69] Setting metrics-server=true in profile "addons-703024"
	I0815 17:05:51.425529  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425533  385407 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-703024"
	I0815 17:05:51.425545  385407 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-703024"
	I0815 17:05:51.425555  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425556  385407 addons.go:69] Setting registry=true in profile "addons-703024"
	I0815 17:05:51.425566  385407 addons.go:69] Setting volumesnapshots=true in profile "addons-703024"
	I0815 17:05:51.425575  385407 addons.go:234] Setting addon registry=true in "addons-703024"
	I0815 17:05:51.425581  385407 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-703024"
	I0815 17:05:51.425588  385407 addons.go:234] Setting addon volumesnapshots=true in "addons-703024"
	I0815 17:05:51.425601  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425616  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425548  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425508  385407 addons.go:234] Setting addon ingress=true in "addons-703024"
	I0815 17:05:51.425738  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425898  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.425556  385407 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-703024"
	I0815 17:05:51.426004  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.426063  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.426071  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.426079  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.426121  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.426150  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.425521  385407 addons.go:234] Setting addon helm-tiller=true in "addons-703024"
	I0815 17:05:51.426321  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.426776  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.425567  385407 addons.go:234] Setting addon metrics-server=true in "addons-703024"
	I0815 17:05:51.425485  385407 addons.go:69] Setting default-storageclass=true in profile "addons-703024"
	I0815 17:05:51.425535  385407 addons.go:69] Setting volcano=true in profile "addons-703024"
	I0815 17:05:51.425475  385407 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-703024"
	I0815 17:05:51.425487  385407 addons.go:69] Setting cloud-spanner=true in profile "addons-703024"
	I0815 17:05:51.425507  385407 addons.go:69] Setting storage-provisioner=true in profile "addons-703024"
	I0815 17:05:51.426066  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.427159  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.427241  385407 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-703024"
	I0815 17:05:51.427302  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.427372  385407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-703024"
	I0815 17:05:51.427421  385407 addons.go:234] Setting addon cloud-spanner=true in "addons-703024"
	I0815 17:05:51.428204  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.426894  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.428430  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.428744  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.428748  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.427511  385407 addons.go:234] Setting addon volcano=true in "addons-703024"
	I0815 17:05:51.429128  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.427562  385407 addons.go:234] Setting addon storage-provisioner=true in "addons-703024"
	I0815 17:05:51.429178  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.429560  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.429592  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.425454  385407 addons.go:69] Setting gcp-auth=true in profile "addons-703024"
	I0815 17:05:51.431391  385407 mustload.go:65] Loading cluster: addons-703024
	I0815 17:05:51.431596  385407 config.go:182] Loaded profile config "addons-703024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:05:51.431876  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.427912  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.427542  385407 out.go:177] * Verifying Kubernetes components...
	I0815 17:05:51.443755  385407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:05:51.467558  385407 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-703024"
	I0815 17:05:51.467612  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.468210  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.468414  385407 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 17:05:51.468531  385407 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 17:05:51.469979  385407 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 17:05:51.470043  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 17:05:51.471169  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.470081  385407 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 17:05:51.471350  385407 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 17:05:51.471392  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.471091  385407 addons.go:234] Setting addon default-storageclass=true in "addons-703024"
	I0815 17:05:51.472464  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.473084  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.474111  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 17:05:51.474148  385407 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 17:05:51.474179  385407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:05:51.476230  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 17:05:51.476253  385407 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 17:05:51.476314  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.476636  385407 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 17:05:51.476654  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 17:05:51.476698  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.480182  385407 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 17:05:51.481587  385407 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 17:05:51.481662  385407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 17:05:51.484255  385407 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 17:05:51.484276  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 17:05:51.484349  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.484871  385407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:05:51.486160  385407 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 17:05:51.486510  385407 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 17:05:51.486528  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 17:05:51.486592  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.494703  385407 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 17:05:51.494726  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 17:05:51.494786  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.497031  385407 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 17:05:51.501549  385407 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 17:05:51.501579  385407 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 17:05:51.501650  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.503283  385407 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0815 17:05:51.506367  385407 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0815 17:05:51.506394  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0815 17:05:51.506458  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.514649  385407 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 17:05:51.515872  385407 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 17:05:51.515893  385407 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 17:05:51.515972  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	W0815 17:05:51.528185  385407 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
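The volcano failure is expected on this runner: the addon's enable callback rejects cri-o, and this profile is running crio (see the Loaded profile config lines above). Exercising volcano would need a profile on a runtime it supports; a sketch, where volcano-test is a hypothetical profile name and docker is assumed to be a supported runtime:

        minikube start -p volcano-test --container-runtime=docker
        minikube -p volcano-test addons enable volcano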
	I0815 17:05:51.529295  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
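All of the `docker container inspect -f ...` calls above answer one question: which host port is mapped to the container's SSH port 22. The Go template indexes .NetworkSettings.Ports, and on this run it resolves to 33138, exactly the Port every sshutil client here dials:

        docker container inspect -f \
          '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-703024
        # -> 33138 on this run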
	I0815 17:05:51.531106  385407 out.go:177]   - Using image docker.io/busybox:stable
	I0815 17:05:51.532383  385407 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 17:05:51.533619  385407 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 17:05:51.533639  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 17:05:51.533696  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.535188  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 17:05:51.536682  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 17:05:51.537964  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 17:05:51.539717  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.539817  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.541226  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 17:05:51.542562  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 17:05:51.543845  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 17:05:51.545121  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 17:05:51.545743  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.546843  385407 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 17:05:51.547168  385407 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 17:05:51.547225  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.548319  385407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 17:05:51.549825  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 17:05:51.549938  385407 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:05:51.549962  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 17:05:51.550019  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.551028  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.551329  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 17:05:51.551348  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 17:05:51.551398  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.551403  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.569267  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.570543  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.582473  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.584797  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.586357  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.597566  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.601773  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.603225  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.607246  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	W0815 17:05:51.653436  385407 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 17:05:51.653475  385407 retry.go:31] will retry after 305.49033ms: ssh: handshake failed: EOF
	I0815 17:05:51.658732  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
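The sed pipeline above rewrites the live coredns ConfigMap in a single pass: it inserts a hosts block immediately before the forward directive so that host.minikube.internal resolves to the gateway (192.168.49.1), and inserts a log directive before errors. Reconstructed from the two sed expressions, the patched region of the Corefile reads:

        log
        errors
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf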
	I0815 17:05:51.754633  385407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:05:51.856692  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 17:05:51.958573  385407 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 17:05:51.958598  385407 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 17:05:51.958604  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 17:05:52.061057  385407 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 17:05:52.061092  385407 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 17:05:52.065872  385407 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 17:05:52.065901  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 17:05:52.153051  385407 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0815 17:05:52.153148  385407 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0815 17:05:52.155099  385407 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 17:05:52.155181  385407 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 17:05:52.155130  385407 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 17:05:52.155273  385407 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 17:05:52.155961  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:05:52.161212  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 17:05:52.166461  385407 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 17:05:52.166496  385407 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 17:05:52.257320  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 17:05:52.259558  385407 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 17:05:52.259638  385407 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 17:05:52.265416  385407 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 17:05:52.265508  385407 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0815 17:05:52.273344  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 17:05:52.353531  385407 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 17:05:52.353635  385407 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 17:05:52.360446  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:05:52.367522  385407 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 17:05:52.367608  385407 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 17:05:52.453336  385407 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 17:05:52.453435  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 17:05:52.472914  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 17:05:52.557086  385407 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:05:52.557182  385407 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 17:05:52.565246  385407 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 17:05:52.565276  385407 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 17:05:52.572975  385407 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 17:05:52.573054  385407 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 17:05:52.654248  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 17:05:52.654331  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 17:05:52.854050  385407 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 17:05:52.854079  385407 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 17:05:52.954195  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:05:52.955706  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 17:05:52.955777  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 17:05:52.957817  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 17:05:52.963997  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 17:05:52.964025  385407 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 17:05:52.967980  385407 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 17:05:52.968002  385407 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 17:05:53.156760  385407 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:05:53.156787  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 17:05:53.255040  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 17:05:53.255072  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 17:05:53.263861  385407 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 17:05:53.263892  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 17:05:53.359760  385407 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.700985691s)
	I0815 17:05:53.359800  385407 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0815 17:05:53.361060  385407 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.606393204s)
	I0815 17:05:53.361896  385407 node_ready.go:35] waiting up to 6m0s for node "addons-703024" to be "Ready" ...
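From here node_ready.go polls the node object until its Ready condition turns True; the "Ready":"False" lines that recur below are individual poll results. A standalone equivalent of this wait would be something like:

        kubectl wait --for=condition=Ready node/addons-703024 --timeout=6m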
	I0815 17:05:53.362108  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.505384387s)
	I0815 17:05:53.362350  385407 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 17:05:53.362366  385407 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 17:05:53.458270  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:05:53.660175  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 17:05:53.756392  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 17:05:53.756481  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 17:05:53.759818  385407 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 17:05:53.759899  385407 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 17:05:53.964938  385407 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-703024" context rescaled to 1 replicas
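A single-node profile does not need the two coredns replicas kubeadm deploys by default, so minikube scales the deployment down to one. Done by hand that is simply:

        kubectl -n kube-system scale deployment coredns --replicas=1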
	I0815 17:05:54.176828  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 17:05:54.176911  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 17:05:54.258921  385407 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 17:05:54.258997  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 17:05:54.370736  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 17:05:54.370821  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 17:05:54.566434  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 17:05:54.866051  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 17:05:54.866147  385407 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 17:05:55.256980  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 17:05:55.257074  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 17:05:55.272458  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.313811215s)
	I0815 17:05:55.272594  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.116604221s)
	I0815 17:05:55.373596  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:05:55.474717  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 17:05:55.474748  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 17:05:55.675263  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 17:05:55.675293  385407 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 17:05:55.873052  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 17:05:57.464516  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.303212304s)
	I0815 17:05:57.464576  385407 addons.go:475] Verifying addon ingress=true in "addons-703024"
	I0815 17:05:57.464618  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.207196625s)
	I0815 17:05:57.464726  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.191301732s)
	I0815 17:05:57.464803  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.104274658s)
	I0815 17:05:57.464889  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.991890496s)
	I0815 17:05:57.464970  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.510743669s)
	I0815 17:05:57.464993  385407 addons.go:475] Verifying addon metrics-server=true in "addons-703024"
	I0815 17:05:57.465038  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.507194695s)
	I0815 17:05:57.465055  385407 addons.go:475] Verifying addon registry=true in "addons-703024"
	I0815 17:05:57.466452  385407 out.go:177] * Verifying ingress addon...
	I0815 17:05:57.467328  385407 out.go:177] * Verifying registry addon...
	I0815 17:05:57.468912  385407 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 17:05:57.469839  385407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 17:05:57.475723  385407 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 17:05:57.475782  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:57.475979  385407 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 17:05:57.476000  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
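kapi.go's wait loop is minikube's in-process version of kubectl wait: list the pods behind a label selector, then block until they leave Pending. A hand-run sketch of the two waits started here (timeouts are illustrative; for ingress-nginx only the controller pod, not the completed admission jobs, needs to reach Ready, hence the narrower component label):

        kubectl -n kube-system wait --for=condition=Ready pod \
          -l kubernetes.io/minikube-addons=registry --timeout=3m
        kubectl -n ingress-nginx wait --for=condition=Ready pod \
          -l app.kubernetes.io/component=controller --timeout=3m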
	I0815 17:05:57.866039  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:05:57.973168  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:57.973804  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:05:58.474536  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:58.475219  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:05:58.488985  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.030604422s)
	W0815 17:05:58.489080  385407 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 17:05:58.489114  385407 retry.go:31] will retry after 259.352422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 17:05:58.489117  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.828858761s)
	I0815 17:05:58.489209  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.922670339s)
	I0815 17:05:58.490791  385407 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-703024 service yakd-dashboard -n yakd-dashboard
	
	I0815 17:05:58.748823  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
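The failure being retried here is a CRD establishment race: a single apply both creates the VolumeSnapshot CRDs and, in the same pass, a VolumeSnapshotClass object, but the API server has not finished registering the new kind, so the REST mapping lookup fails ("ensure CRDs are installed first"). The --force retry above succeeds, as the Completed line further down shows. A race-free manual sequence would gate on the CRD first, e.g.:

        kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
        kubectl wait --for=condition=Established --timeout=60s \
          crd/volumesnapshotclasses.snapshot.storage.k8s.io
        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml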
	I0815 17:05:58.758298  385407 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 17:05:58.758369  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:58.780391  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:58.974027  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:58.975213  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:05:59.074453  385407 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 17:05:59.153987  385407 addons.go:234] Setting addon gcp-auth=true in "addons-703024"
	I0815 17:05:59.154088  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:59.154659  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:59.180754  385407 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 17:05:59.180811  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:59.200076  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
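gcp-auth only activates because the runner has application-default credentials available: minikube copies google_application_credentials.json and the project name onto the node, then flips the addon on. Reproducing that locally would look roughly like this (assuming the gcloud SDK is installed; the login step writes the ADC json that minikube picks up):

        gcloud auth application-default login
        minikube -p addons-703024 addons enable gcp-auth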
	I0815 17:05:59.372825  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.499647938s)
	I0815 17:05:59.372868  385407 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-703024"
	I0815 17:05:59.374358  385407 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 17:05:59.376913  385407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 17:05:59.379446  385407 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 17:05:59.379468  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:05:59.473084  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:05:59.473558  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:59.880914  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:05:59.972352  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:59.973090  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:00.364910  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:06:00.380684  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:00.472629  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:00.472830  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:00.880230  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:00.972312  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:00.972377  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:01.380819  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:01.474597  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:01.474963  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:01.881716  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:01.972692  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:01.972900  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:02.203932  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.45504696s)
	I0815 17:06:02.203975  385407 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.023189594s)
	I0815 17:06:02.206164  385407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:06:02.207669  385407 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 17:06:02.208956  385407 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 17:06:02.208975  385407 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 17:06:02.255150  385407 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 17:06:02.255182  385407 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 17:06:02.273229  385407 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 17:06:02.273258  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 17:06:02.290404  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 17:06:02.365310  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:06:02.380048  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:02.472586  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:02.473137  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:02.868616  385407 addons.go:475] Verifying addon gcp-auth=true in "addons-703024"
	I0815 17:06:02.870658  385407 out.go:177] * Verifying gcp-auth addon...
	I0815 17:06:02.873177  385407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 17:06:02.877483  385407 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 17:06:02.877504  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:02.879343  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:02.972755  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:02.973176  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:03.376386  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:03.379610  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:03.472375  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:03.472838  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:03.876697  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:03.879839  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:03.972881  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:03.973038  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:04.365376  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:06:04.376741  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:04.379909  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:04.472968  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:04.472980  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:04.876537  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:04.879687  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:04.972794  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:04.972964  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:05.376933  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:05.379570  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:05.472710  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:05.472782  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:05.876423  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:05.879388  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:05.972455  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:05.972483  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:06.376271  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:06.379378  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:06.472409  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:06.472515  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:06.865250  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:06:06.877094  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:06.879207  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:06.972508  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:06.972901  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:07.375801  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:07.379993  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:07.472732  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:07.473326  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:07.877194  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:07.879399  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:07.972523  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:07.972533  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:08.376566  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:08.379395  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:08.472436  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:08.472498  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:08.865468  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:06:08.877937  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:08.879370  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:08.972172  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:08.972520  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:09.376491  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:09.379384  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:09.472348  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:09.472391  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:09.876354  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:09.879173  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:09.972670  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:09.973072  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:10.377768  385407 node_ready.go:49] node "addons-703024" has status "Ready":"True"
	I0815 17:06:10.377796  385407 node_ready.go:38] duration metric: took 17.015874306s for node "addons-703024" to be "Ready" ...
	I0815 17:06:10.377809  385407 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
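The node_ready.go lines record the node's Ready condition flipping from "False" to "True" (17.0s in this run). The underlying check is a scan of the node's status conditions; a sketch with a hypothetical helper name:

// nodeReady reports whether a Node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}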
	I0815 17:06:10.378313  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:10.379194  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:10.461861  385407 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qkxj6" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:10.476140  385407 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 17:06:10.476167  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:10.476299  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:10.876640  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:10.880522  385407 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 17:06:10.880542  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:10.977230  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:10.977531  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:11.381698  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:11.381991  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:11.483169  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:11.483232  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:11.876995  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:11.880418  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:11.972806  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:11.973624  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:12.377303  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:12.380816  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:12.468162  385407 pod_ready.go:93] pod "coredns-6f6b679f8f-qkxj6" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.468190  385407 pod_ready.go:82] duration metric: took 2.006298074s for pod "coredns-6f6b679f8f-qkxj6" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.468214  385407 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.472443  385407 pod_ready.go:93] pod "etcd-addons-703024" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.472465  385407 pod_ready.go:82] duration metric: took 4.243953ms for pod "etcd-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.472476  385407 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.472999  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:12.473316  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:12.476235  385407 pod_ready.go:93] pod "kube-apiserver-addons-703024" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.476254  385407 pod_ready.go:82] duration metric: took 3.770872ms for pod "kube-apiserver-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.476265  385407 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.479893  385407 pod_ready.go:93] pod "kube-controller-manager-addons-703024" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.479909  385407 pod_ready.go:82] duration metric: took 3.637464ms for pod "kube-controller-manager-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.479919  385407 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nsvg6" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.483565  385407 pod_ready.go:93] pod "kube-proxy-nsvg6" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.483581  385407 pod_ready.go:82] duration metric: took 3.657002ms for pod "kube-proxy-nsvg6" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.483589  385407 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.866512  385407 pod_ready.go:93] pod "kube-scheduler-addons-703024" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.866533  385407 pod_ready.go:82] duration metric: took 382.938072ms for pod "kube-scheduler-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.866543  385407 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace to be "Ready" ...
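From here pod_ready.go waits on individual system pods by name (the pod names are taken from the log). Continuing the sketch above, a per-pod wait differs from the selector wait only in fetching one pod by name; the helper name and interval are again assumptions:

// waitForPodReady polls a single named pod until its Ready condition is True.
// Reuses podReady from the earlier sketch.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // not found yet or transient error: keep polling
		}
		return podReady(p), nil
	})
}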
	I0815 17:06:12.876077  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:12.880688  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:12.973021  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:12.973195  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:13.377170  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:13.380489  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:13.472895  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:13.473315  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:13.876871  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:13.880455  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:13.972765  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:13.973309  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:14.376073  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:14.381145  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:14.472588  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:14.473017  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:14.873343  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:14.875842  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:14.880760  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:14.973106  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:14.973496  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:15.376211  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:15.381545  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:15.473921  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:15.474138  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:15.876045  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:15.881444  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:15.972824  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:15.972969  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:16.376000  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:16.380534  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:16.472830  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:16.473127  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:16.876764  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:16.881577  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:16.973181  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:16.973479  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:17.373018  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:17.376442  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:17.380904  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:17.473184  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:17.473413  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:17.875778  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:17.881218  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:17.973262  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:17.974224  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:18.376112  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:18.381056  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:18.477575  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:18.478004  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:18.876525  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:18.880573  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:18.972949  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:18.973222  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:19.376296  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:19.377598  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:19.381723  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:19.478360  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:19.478561  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:19.877172  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:19.880691  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:19.974224  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:19.974665  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:20.376815  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:20.455397  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:20.474960  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:20.475553  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:20.876368  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:20.882113  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:20.973570  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:20.974453  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:21.376121  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:21.381279  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:21.472914  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:21.472914  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:21.872111  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:21.877208  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:21.880448  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:21.973777  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:21.974499  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:22.375749  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:22.380427  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:22.472870  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:22.473029  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:22.876411  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:22.880361  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:22.973508  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:22.974991  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:23.375988  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:23.381204  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:23.473406  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:23.473545  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:23.877008  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:23.880043  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:23.977919  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:23.978279  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:24.371596  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:24.376096  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:24.380642  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:24.477603  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:24.477956  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:24.876504  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:24.881403  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:24.973373  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:24.973688  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:25.375859  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:25.381150  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:25.473703  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:25.474139  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:25.876519  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:25.880343  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:25.973092  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:25.973169  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:26.372183  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:26.376592  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:26.380393  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:26.473091  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:26.473471  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:26.875870  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:26.880899  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:26.973644  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:26.974702  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:27.376368  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:27.379825  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:27.477618  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:27.477889  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:27.875708  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:27.880876  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:27.976164  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:27.976637  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:28.372718  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:28.376619  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:28.380861  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:28.473718  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:28.474408  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:28.875703  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:28.880512  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:28.973104  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:28.973505  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:29.376268  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:29.380295  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:29.477602  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:29.478160  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:29.876408  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:29.880356  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:29.972697  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:29.973272  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:30.375755  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:30.382948  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:30.473095  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:30.473547  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:30.872754  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:30.877129  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:30.880010  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:30.977813  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:30.978122  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:31.375523  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:31.380086  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:31.473125  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:31.473131  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:31.876940  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:31.880836  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:31.973265  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:31.973483  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:32.376093  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:32.381571  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:32.473056  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:32.473087  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:32.876432  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:32.880824  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:32.973615  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:32.974365  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:33.373006  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:33.375721  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:33.380669  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:33.473066  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:33.473863  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:33.876419  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:33.880377  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:33.972771  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:33.973200  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:34.376359  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:34.380997  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:34.473295  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:34.473427  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:34.876918  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:34.881033  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:34.976601  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:34.977833  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:35.376200  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:35.381259  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:35.472993  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:35.473108  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:35.872611  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:35.875739  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:35.880831  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:35.972929  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:35.973399  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:36.376376  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:36.380369  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:36.473001  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:36.473368  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:36.875782  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:36.881903  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:36.972601  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:36.972856  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:37.376124  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:37.380737  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:37.473138  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:37.473411  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:37.872748  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:37.875948  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:37.881168  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:37.973343  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:37.973933  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:38.375829  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:38.380736  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:38.473039  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:38.473137  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:38.875442  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:38.880020  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:38.972763  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:38.972906  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:39.376281  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:39.379870  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:39.473059  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:39.473451  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:39.875820  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:39.880480  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:39.973121  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:39.973518  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:40.372534  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:40.376563  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:40.381685  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:40.473702  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:40.474161  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:40.962764  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:40.964332  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:40.978379  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:40.978719  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:41.377406  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:41.382217  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:41.473262  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:41.474226  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:41.876216  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:41.881830  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:41.973151  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:41.973567  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:42.372665  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:42.375914  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:42.381210  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:42.473121  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:42.473325  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:42.876328  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:42.881734  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:42.973397  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:42.974314  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:43.376321  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:43.381572  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:43.473106  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:43.473518  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:43.875601  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:43.880620  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:43.973590  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:43.973783  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:44.372874  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:44.375882  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:44.380802  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:44.473010  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:44.473094  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:44.876904  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:44.880920  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:44.972732  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:44.973066  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:45.376047  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:45.380937  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:45.473606  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:45.473818  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:45.876387  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:45.880172  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:45.973236  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:45.973299  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:46.376153  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:46.381160  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:46.472853  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:46.473599  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:46.872714  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:46.875812  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:46.881934  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:46.972948  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:46.973367  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:47.376247  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:47.381974  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:47.472982  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:47.473168  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:47.876742  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:47.881032  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:47.973087  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:47.973259  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:48.375987  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:48.380917  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:48.472648  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:48.472849  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:48.876836  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:48.880430  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:48.972776  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:48.973108  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:49.371890  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:49.376142  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:49.381131  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:49.472932  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:49.473088  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:49.875912  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:49.880633  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:49.973010  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:49.973590  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:50.376375  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:50.381864  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:50.473650  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:50.473681  385407 kapi.go:107] duration metric: took 53.003842114s to wait for kubernetes.io/minikube-addons=registry ...
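The kapi.go:107 line closes out the registry wait with a duration metric (53.0s here). A hedged usage example of the earlier waitForSelectorReady sketch with the same bookkeeping; the kube-system namespace, the 6-minute timeout, and the log/time imports are assumptions:

start := time.Now()
if err := waitForSelectorReady(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
	log.Fatalf("registry pods never became Ready: %v", err)
}
log.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=registry", time.Since(start))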
	I0815 17:06:50.876364  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:50.880037  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:50.972431  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:51.372064  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:51.376038  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:51.380816  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:51.473234  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:51.955476  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:51.967619  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:51.975809  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:52.457240  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:52.458091  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:52.473821  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:52.876467  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:52.880722  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:52.974148  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:53.374234  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:53.376308  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:53.456778  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:53.473557  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:53.876114  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:53.881491  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:53.973301  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:54.376408  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:54.383219  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:54.483324  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:54.901994  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:54.902767  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:54.973085  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:55.376242  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:55.381312  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:55.472763  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:55.872886  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:55.876240  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:55.880857  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:55.972541  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:56.376413  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:56.380517  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:56.473164  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:56.877692  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:56.880514  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:56.973136  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:57.376234  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:57.381638  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:57.473048  385407 kapi.go:107] duration metric: took 1m0.004133248s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 17:06:57.876235  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:57.881582  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:58.371527  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:58.375729  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:58.380484  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:58.875923  385407 kapi.go:107] duration metric: took 56.002743268s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 17:06:58.877442  385407 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-703024 cluster.
	I0815 17:06:58.880208  385407 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 17:06:58.881222  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:58.882895  385407 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0815 17:06:59.380848  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:59.880949  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:00.468423  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:00.469536  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:00.881064  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:01.381421  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:01.881598  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:02.381491  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:02.872405  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:02.880370  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:03.381274  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:03.880433  385407 kapi.go:107] duration metric: took 1m4.503523017s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 17:07:03.882151  385407 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, ingress-dns, storage-provisioner, helm-tiller, metrics-server, storage-provisioner-rancher, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0815 17:07:03.883336  385407 addons.go:510] duration metric: took 1m12.458019521s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass ingress-dns storage-provisioner helm-tiller metrics-server storage-provisioner-rancher inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0815 17:07:05.372078  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:07.372821  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:09.873087  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:12.372099  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:14.873691  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:17.371763  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:19.871985  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:21.371978  385407 pod_ready.go:93] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:21.372006  385407 pod_ready.go:82] duration metric: took 1m8.505451068s for pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:21.372018  385407 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xqk8k" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:21.375919  385407 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xqk8k" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:21.375939  385407 pod_ready.go:82] duration metric: took 3.912854ms for pod "nvidia-device-plugin-daemonset-xqk8k" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:21.375957  385407 pod_ready.go:39] duration metric: took 1m10.99813416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:07:21.375979  385407 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:07:21.376008  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:07:21.376062  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:07:21.410124  385407 cri.go:89] found id: "3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:21.410152  385407 cri.go:89] found id: ""
	I0815 17:07:21.410163  385407 logs.go:276] 1 containers: [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc]
	I0815 17:07:21.410217  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.413337  385407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 17:07:21.413389  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:07:21.445971  385407 cri.go:89] found id: "3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:21.446000  385407 cri.go:89] found id: ""
	I0815 17:07:21.446010  385407 logs.go:276] 1 containers: [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03]
	I0815 17:07:21.446066  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.449694  385407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 17:07:21.449753  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:07:21.484182  385407 cri.go:89] found id: "e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:21.484208  385407 cri.go:89] found id: ""
	I0815 17:07:21.484218  385407 logs.go:276] 1 containers: [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107]
	I0815 17:07:21.484271  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.487560  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:07:21.487613  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:07:21.520298  385407 cri.go:89] found id: "ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:21.520322  385407 cri.go:89] found id: ""
	I0815 17:07:21.520330  385407 logs.go:276] 1 containers: [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627]
	I0815 17:07:21.520380  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.523524  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:07:21.523591  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:07:21.556415  385407 cri.go:89] found id: "a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:21.556437  385407 cri.go:89] found id: ""
	I0815 17:07:21.556446  385407 logs.go:276] 1 containers: [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c]
	I0815 17:07:21.556489  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.559643  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:07:21.559696  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:07:21.594625  385407 cri.go:89] found id: "71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:21.594647  385407 cri.go:89] found id: ""
	I0815 17:07:21.594655  385407 logs.go:276] 1 containers: [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f]
	I0815 17:07:21.594706  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.598115  385407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 17:07:21.598181  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:07:21.630686  385407 cri.go:89] found id: "3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:21.630708  385407 cri.go:89] found id: ""
	I0815 17:07:21.630716  385407 logs.go:276] 1 containers: [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358]
	I0815 17:07:21.630757  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.633874  385407 logs.go:123] Gathering logs for etcd [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03] ...
	I0815 17:07:21.633896  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:21.683490  385407 logs.go:123] Gathering logs for coredns [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107] ...
	I0815 17:07:21.683521  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:21.717823  385407 logs.go:123] Gathering logs for kube-proxy [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c] ...
	I0815 17:07:21.717850  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:21.749631  385407 logs.go:123] Gathering logs for kube-controller-manager [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f] ...
	I0815 17:07:21.749657  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:21.806137  385407 logs.go:123] Gathering logs for kindnet [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358] ...
	I0815 17:07:21.806171  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:21.843758  385407 logs.go:123] Gathering logs for dmesg ...
	I0815 17:07:21.843785  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:07:21.869010  385407 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:07:21.869042  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:07:21.965521  385407 logs.go:123] Gathering logs for kube-scheduler [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627] ...
	I0815 17:07:21.965552  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:22.007157  385407 logs.go:123] Gathering logs for CRI-O ...
	I0815 17:07:22.007190  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 17:07:22.086492  385407 logs.go:123] Gathering logs for container status ...
	I0815 17:07:22.086531  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:07:22.126668  385407 logs.go:123] Gathering logs for kubelet ...
	I0815 17:07:22.126733  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:07:22.190335  385407 logs.go:123] Gathering logs for kube-apiserver [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc] ...
	I0815 17:07:22.190372  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:24.734652  385407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:07:24.748234  385407 api_server.go:72] duration metric: took 1m33.322956981s to wait for apiserver process to appear ...
	I0815 17:07:24.748258  385407 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:07:24.748301  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:07:24.748351  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:07:24.780350  385407 cri.go:89] found id: "3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:24.780376  385407 cri.go:89] found id: ""
	I0815 17:07:24.780388  385407 logs.go:276] 1 containers: [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc]
	I0815 17:07:24.780441  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.783624  385407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 17:07:24.783696  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:07:24.815446  385407 cri.go:89] found id: "3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:24.815466  385407 cri.go:89] found id: ""
	I0815 17:07:24.815476  385407 logs.go:276] 1 containers: [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03]
	I0815 17:07:24.815527  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.818638  385407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 17:07:24.818704  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:07:24.851543  385407 cri.go:89] found id: "e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:24.851562  385407 cri.go:89] found id: ""
	I0815 17:07:24.851576  385407 logs.go:276] 1 containers: [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107]
	I0815 17:07:24.851633  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.854745  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:07:24.854799  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:07:24.886958  385407 cri.go:89] found id: "ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:24.886982  385407 cri.go:89] found id: ""
	I0815 17:07:24.886992  385407 logs.go:276] 1 containers: [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627]
	I0815 17:07:24.887043  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.890269  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:07:24.890320  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:07:24.923133  385407 cri.go:89] found id: "a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:24.923154  385407 cri.go:89] found id: ""
	I0815 17:07:24.923162  385407 logs.go:276] 1 containers: [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c]
	I0815 17:07:24.923207  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.926544  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:07:24.926614  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:07:24.958401  385407 cri.go:89] found id: "71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:24.958425  385407 cri.go:89] found id: ""
	I0815 17:07:24.958435  385407 logs.go:276] 1 containers: [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f]
	I0815 17:07:24.958487  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.961717  385407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 17:07:24.961772  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:07:24.994751  385407 cri.go:89] found id: "3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:24.994771  385407 cri.go:89] found id: ""
	I0815 17:07:24.994778  385407 logs.go:276] 1 containers: [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358]
	I0815 17:07:24.994819  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.998255  385407 logs.go:123] Gathering logs for kubelet ...
	I0815 17:07:24.998278  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:07:25.053616  385407 logs.go:123] Gathering logs for dmesg ...
	I0815 17:07:25.053649  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:07:25.077668  385407 logs.go:123] Gathering logs for kube-apiserver [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc] ...
	I0815 17:07:25.077696  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:25.119884  385407 logs.go:123] Gathering logs for kube-controller-manager [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f] ...
	I0815 17:07:25.119914  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:25.177731  385407 logs.go:123] Gathering logs for kindnet [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358] ...
	I0815 17:07:25.177767  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:25.215502  385407 logs.go:123] Gathering logs for CRI-O ...
	I0815 17:07:25.215532  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 17:07:25.291742  385407 logs.go:123] Gathering logs for container status ...
	I0815 17:07:25.291780  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:07:25.332657  385407 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:07:25.332688  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:07:25.430198  385407 logs.go:123] Gathering logs for etcd [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03] ...
	I0815 17:07:25.430231  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:25.480647  385407 logs.go:123] Gathering logs for coredns [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107] ...
	I0815 17:07:25.480678  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:25.517396  385407 logs.go:123] Gathering logs for kube-scheduler [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627] ...
	I0815 17:07:25.517423  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:25.556595  385407 logs.go:123] Gathering logs for kube-proxy [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c] ...
	I0815 17:07:25.556623  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:28.089700  385407 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 17:07:28.094210  385407 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 17:07:28.095164  385407 api_server.go:141] control plane version: v1.31.0
	I0815 17:07:28.095188  385407 api_server.go:131] duration metric: took 3.346922594s to wait for apiserver health ...
	I0815 17:07:28.095196  385407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:07:28.095217  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:07:28.095267  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:07:28.128374  385407 cri.go:89] found id: "3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:28.128394  385407 cri.go:89] found id: ""
	I0815 17:07:28.128402  385407 logs.go:276] 1 containers: [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc]
	I0815 17:07:28.128447  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.131712  385407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 17:07:28.131760  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:07:28.164423  385407 cri.go:89] found id: "3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:28.164444  385407 cri.go:89] found id: ""
	I0815 17:07:28.164452  385407 logs.go:276] 1 containers: [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03]
	I0815 17:07:28.164499  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.167667  385407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 17:07:28.167736  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:07:28.201035  385407 cri.go:89] found id: "e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:28.201055  385407 cri.go:89] found id: ""
	I0815 17:07:28.201062  385407 logs.go:276] 1 containers: [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107]
	I0815 17:07:28.201116  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.204306  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:07:28.204367  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:07:28.238322  385407 cri.go:89] found id: "ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:28.238350  385407 cri.go:89] found id: ""
	I0815 17:07:28.238361  385407 logs.go:276] 1 containers: [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627]
	I0815 17:07:28.238421  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.241906  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:07:28.241961  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:07:28.277044  385407 cri.go:89] found id: "a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:28.277069  385407 cri.go:89] found id: ""
	I0815 17:07:28.277080  385407 logs.go:276] 1 containers: [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c]
	I0815 17:07:28.277140  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.280430  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:07:28.280484  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:07:28.313924  385407 cri.go:89] found id: "71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:28.313947  385407 cri.go:89] found id: ""
	I0815 17:07:28.313955  385407 logs.go:276] 1 containers: [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f]
	I0815 17:07:28.314000  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.317333  385407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 17:07:28.317388  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:07:28.350499  385407 cri.go:89] found id: "3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:28.350528  385407 cri.go:89] found id: ""
	I0815 17:07:28.350537  385407 logs.go:276] 1 containers: [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358]
	I0815 17:07:28.350592  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.354018  385407 logs.go:123] Gathering logs for kindnet [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358] ...
	I0815 17:07:28.354043  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:28.392887  385407 logs.go:123] Gathering logs for CRI-O ...
	I0815 17:07:28.392918  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 17:07:28.464780  385407 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:07:28.464819  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:07:28.564141  385407 logs.go:123] Gathering logs for kube-apiserver [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc] ...
	I0815 17:07:28.564173  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:28.608701  385407 logs.go:123] Gathering logs for coredns [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107] ...
	I0815 17:07:28.608733  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:28.644976  385407 logs.go:123] Gathering logs for kube-proxy [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c] ...
	I0815 17:07:28.645006  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:28.678355  385407 logs.go:123] Gathering logs for kube-controller-manager [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f] ...
	I0815 17:07:28.678386  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:28.735603  385407 logs.go:123] Gathering logs for kubelet ...
	I0815 17:07:28.735639  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:07:28.789160  385407 logs.go:123] Gathering logs for dmesg ...
	I0815 17:07:28.789196  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:07:28.813650  385407 logs.go:123] Gathering logs for etcd [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03] ...
	I0815 17:07:28.813685  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:28.862968  385407 logs.go:123] Gathering logs for kube-scheduler [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627] ...
	I0815 17:07:28.863002  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:28.906849  385407 logs.go:123] Gathering logs for container status ...
	I0815 17:07:28.906891  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:07:31.458443  385407 system_pods.go:59] 19 kube-system pods found
	I0815 17:07:31.458475  385407 system_pods.go:61] "coredns-6f6b679f8f-qkxj6" [34ae48c8-3d7b-4a77-8b13-13b8b10756f5] Running
	I0815 17:07:31.458480  385407 system_pods.go:61] "csi-hostpath-attacher-0" [7946b78c-985f-4cda-96a1-5c49966406a5] Running
	I0815 17:07:31.458484  385407 system_pods.go:61] "csi-hostpath-resizer-0" [82257fb0-be7a-4b13-9923-f696e123c103] Running
	I0815 17:07:31.458488  385407 system_pods.go:61] "csi-hostpathplugin-swhv8" [9149a811-b352-498e-805f-5de2e5a5a3ef] Running
	I0815 17:07:31.458491  385407 system_pods.go:61] "etcd-addons-703024" [c09918ca-f68f-4983-87a3-735fea26a55d] Running
	I0815 17:07:31.458496  385407 system_pods.go:61] "kindnet-c9vlm" [d5ebec8a-692a-46ac-aa63-8f88014adda2] Running
	I0815 17:07:31.458499  385407 system_pods.go:61] "kube-apiserver-addons-703024" [99caa053-eb58-456d-b8d5-a077317fb464] Running
	I0815 17:07:31.458503  385407 system_pods.go:61] "kube-controller-manager-addons-703024" [a7dc1511-bbdc-4663-ac6c-4b1e8b99087c] Running
	I0815 17:07:31.458506  385407 system_pods.go:61] "kube-ingress-dns-minikube" [e819b06b-0df3-45f9-a0de-807192f6978e] Running
	I0815 17:07:31.458509  385407 system_pods.go:61] "kube-proxy-nsvg6" [c5cafc62-f92a-4bee-a21e-ea2d555797e6] Running
	I0815 17:07:31.458512  385407 system_pods.go:61] "kube-scheduler-addons-703024" [73d8fa2f-1f2e-4d51-bcf7-bc3fa746cb84] Running
	I0815 17:07:31.458518  385407 system_pods.go:61] "metrics-server-8988944d9-flc8s" [1b94ea1a-e1d1-45d5-ba12-31457ddd2aab] Running
	I0815 17:07:31.458521  385407 system_pods.go:61] "nvidia-device-plugin-daemonset-xqk8k" [dd6bbf51-8737-4c2c-9596-00154e1ec52d] Running
	I0815 17:07:31.458525  385407 system_pods.go:61] "registry-6fb4cdfc84-jnqvt" [2df2b6d1-e4e8-4d1b-962b-574054625724] Running
	I0815 17:07:31.458528  385407 system_pods.go:61] "registry-proxy-4xk99" [7672bca9-2613-4a51-b743-107bdc30df7b] Running
	I0815 17:07:31.458533  385407 system_pods.go:61] "snapshot-controller-56fcc65765-5xmtm" [035efc15-66e6-4699-b4b2-f00adcaa95eb] Running
	I0815 17:07:31.458536  385407 system_pods.go:61] "snapshot-controller-56fcc65765-gqldd" [727e7a8f-5da3-4a26-b4bf-58402e345986] Running
	I0815 17:07:31.458542  385407 system_pods.go:61] "storage-provisioner" [dc5596da-d005-4633-893c-382dd8f2e28e] Running
	I0815 17:07:31.458545  385407 system_pods.go:61] "tiller-deploy-b48cc5f79-twgzw" [f0d20030-3d71-47ce-9f44-cf4f462d6c84] Running
	I0815 17:07:31.458551  385407 system_pods.go:74] duration metric: took 3.363349819s to wait for pod list to return data ...
	I0815 17:07:31.458563  385407 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:07:31.460776  385407 default_sa.go:45] found service account: "default"
	I0815 17:07:31.460797  385407 default_sa.go:55] duration metric: took 2.226514ms for default service account to be created ...
	I0815 17:07:31.460807  385407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:07:31.469667  385407 system_pods.go:86] 19 kube-system pods found
	I0815 17:07:31.469691  385407 system_pods.go:89] "coredns-6f6b679f8f-qkxj6" [34ae48c8-3d7b-4a77-8b13-13b8b10756f5] Running
	I0815 17:07:31.469696  385407 system_pods.go:89] "csi-hostpath-attacher-0" [7946b78c-985f-4cda-96a1-5c49966406a5] Running
	I0815 17:07:31.469700  385407 system_pods.go:89] "csi-hostpath-resizer-0" [82257fb0-be7a-4b13-9923-f696e123c103] Running
	I0815 17:07:31.469704  385407 system_pods.go:89] "csi-hostpathplugin-swhv8" [9149a811-b352-498e-805f-5de2e5a5a3ef] Running
	I0815 17:07:31.469707  385407 system_pods.go:89] "etcd-addons-703024" [c09918ca-f68f-4983-87a3-735fea26a55d] Running
	I0815 17:07:31.469712  385407 system_pods.go:89] "kindnet-c9vlm" [d5ebec8a-692a-46ac-aa63-8f88014adda2] Running
	I0815 17:07:31.469715  385407 system_pods.go:89] "kube-apiserver-addons-703024" [99caa053-eb58-456d-b8d5-a077317fb464] Running
	I0815 17:07:31.469719  385407 system_pods.go:89] "kube-controller-manager-addons-703024" [a7dc1511-bbdc-4663-ac6c-4b1e8b99087c] Running
	I0815 17:07:31.469724  385407 system_pods.go:89] "kube-ingress-dns-minikube" [e819b06b-0df3-45f9-a0de-807192f6978e] Running
	I0815 17:07:31.469727  385407 system_pods.go:89] "kube-proxy-nsvg6" [c5cafc62-f92a-4bee-a21e-ea2d555797e6] Running
	I0815 17:07:31.469733  385407 system_pods.go:89] "kube-scheduler-addons-703024" [73d8fa2f-1f2e-4d51-bcf7-bc3fa746cb84] Running
	I0815 17:07:31.469736  385407 system_pods.go:89] "metrics-server-8988944d9-flc8s" [1b94ea1a-e1d1-45d5-ba12-31457ddd2aab] Running
	I0815 17:07:31.469742  385407 system_pods.go:89] "nvidia-device-plugin-daemonset-xqk8k" [dd6bbf51-8737-4c2c-9596-00154e1ec52d] Running
	I0815 17:07:31.469746  385407 system_pods.go:89] "registry-6fb4cdfc84-jnqvt" [2df2b6d1-e4e8-4d1b-962b-574054625724] Running
	I0815 17:07:31.469751  385407 system_pods.go:89] "registry-proxy-4xk99" [7672bca9-2613-4a51-b743-107bdc30df7b] Running
	I0815 17:07:31.469755  385407 system_pods.go:89] "snapshot-controller-56fcc65765-5xmtm" [035efc15-66e6-4699-b4b2-f00adcaa95eb] Running
	I0815 17:07:31.469760  385407 system_pods.go:89] "snapshot-controller-56fcc65765-gqldd" [727e7a8f-5da3-4a26-b4bf-58402e345986] Running
	I0815 17:07:31.469766  385407 system_pods.go:89] "storage-provisioner" [dc5596da-d005-4633-893c-382dd8f2e28e] Running
	I0815 17:07:31.469772  385407 system_pods.go:89] "tiller-deploy-b48cc5f79-twgzw" [f0d20030-3d71-47ce-9f44-cf4f462d6c84] Running
	I0815 17:07:31.469779  385407 system_pods.go:126] duration metric: took 8.965785ms to wait for k8s-apps to be running ...
	I0815 17:07:31.469786  385407 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:07:31.469835  385407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:07:31.481045  385407 system_svc.go:56] duration metric: took 11.252737ms WaitForService to wait for kubelet
	I0815 17:07:31.481070  385407 kubeadm.go:582] duration metric: took 1m40.05579674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:07:31.481091  385407 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:07:31.483938  385407 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:07:31.483967  385407 node_conditions.go:123] node cpu capacity is 8
	I0815 17:07:31.483984  385407 node_conditions.go:105] duration metric: took 2.886832ms to run NodePressure ...
	I0815 17:07:31.483997  385407 start.go:241] waiting for startup goroutines ...
	I0815 17:07:31.484011  385407 start.go:246] waiting for cluster config update ...
	I0815 17:07:31.484035  385407 start.go:255] writing updated cluster config ...
	I0815 17:07:31.484348  385407 ssh_runner.go:195] Run: rm -f paused
	I0815 17:07:31.534562  385407 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 17:07:31.537462  385407 out.go:177] * Done! kubectl is now configured to use "addons-703024" cluster and "default" namespace by default
	
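An aside on the gcp-auth messages in the start log above: pods can opt out of credential mounting with a label using the `gcp-auth-skip-secret` key. A minimal sketch of setting that at pod-creation time (pod name and image are hypothetical; only the label key comes from the message itself):

	# Hypothetical pod; the gcp-auth addon skips pods that carry this label.
	kubectl run skip-demo --image=busybox --labels=gcp-auth-skip-secret=true -- sleep 3600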
	
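Likewise, the healthz probe logged at 17:07:28 can be reproduced by hand against the endpoint shown there; a sketch (-k skips TLS verification for brevity and is our shortcut, not what minikube does):

	# Endpoint and the expected "ok" body are taken from the api_server.go lines above.
	curl -k https://192.168.49.2:8443/healthz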
	==> CRI-O <==
	Aug 15 17:11:03 addons-703024 crio[1033]: time="2024-08-15 17:11:03.559590952Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=09fb6591-2e07-460c-a925-89cca8eaa814 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:11:03 addons-703024 crio[1033]: time="2024-08-15 17:11:03.560259650Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=98659e39-65b8-4f67-8a13-39df4d135059 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:11:03 addons-703024 crio[1033]: time="2024-08-15 17:11:03.561005987Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=98659e39-65b8-4f67-8a13-39df4d135059 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:11:03 addons-703024 crio[1033]: time="2024-08-15 17:11:03.561682548Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-snj2m/hello-world-app" id=b3b09e6d-27ba-403f-ad9d-98fd84a7612d name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 17:11:03 addons-703024 crio[1033]: time="2024-08-15 17:11:03.561781338Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 15 17:11:03 addons-703024 crio[1033]: time="2024-08-15 17:11:03.577686759Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7eb166ac3b48fe3de1ab08090bdcb5c3c264fcb97396a833bfd562752d4eb9fb/merged/etc/passwd: no such file or directory"
	Aug 15 17:11:03 addons-703024 crio[1033]: time="2024-08-15 17:11:03.577721063Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7eb166ac3b48fe3de1ab08090bdcb5c3c264fcb97396a833bfd562752d4eb9fb/merged/etc/group: no such file or directory"
	Aug 15 17:11:03 addons-703024 crio[1033]: time="2024-08-15 17:11:03.615800630Z" level=info msg="Created container 4085bf7aee9cfcf072efc8aacc13afbeebf4a638dd45f58a38885f4c9cd359bc: default/hello-world-app-55bf9c44b4-snj2m/hello-world-app" id=b3b09e6d-27ba-403f-ad9d-98fd84a7612d name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 17:11:03 addons-703024 crio[1033]: time="2024-08-15 17:11:03.616690276Z" level=info msg="Starting container: 4085bf7aee9cfcf072efc8aacc13afbeebf4a638dd45f58a38885f4c9cd359bc" id=c82806a2-586a-4b07-bff8-a446bb44708b name=/runtime.v1.RuntimeService/StartContainer
	Aug 15 17:11:03 addons-703024 crio[1033]: time="2024-08-15 17:11:03.659635398Z" level=info msg="Started container" PID=11494 containerID=4085bf7aee9cfcf072efc8aacc13afbeebf4a638dd45f58a38885f4c9cd359bc description=default/hello-world-app-55bf9c44b4-snj2m/hello-world-app id=c82806a2-586a-4b07-bff8-a446bb44708b name=/runtime.v1.RuntimeService/StartContainer sandboxID=e5ed20a3b12db1e85c39aff1275fc15f555a3be8695f0f46e11ec84a4e3629a6
	Aug 15 17:11:04 addons-703024 crio[1033]: time="2024-08-15 17:11:04.763210718Z" level=info msg="Stopping container: c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3 (timeout: 2s)" id=0c93c6d9-a750-4866-9bd2-d54b4bed6833 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.769281378Z" level=warning msg="Stopping container c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=0c93c6d9-a750-4866-9bd2-d54b4bed6833 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 17:11:06 addons-703024 conmon[6082]: conmon c22f268eef9dfa76de71 <ninfo>: container 6099 exited with status 137
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.901049684Z" level=info msg="Stopped container c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3: ingress-nginx/ingress-nginx-controller-7559cbf597-gwb2j/controller" id=0c93c6d9-a750-4866-9bd2-d54b4bed6833 name=/runtime.v1.RuntimeService/StopContainer
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.901565713Z" level=info msg="Stopping pod sandbox: f878ace8a058203facb2add220fa77f876ec82f1febe14512489d13b4a49a653" id=e1282220-ffb3-4540-a1a5-3f1b6257c3ae name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.904753097Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-AKBYHZXKIFLULUUC - [0:0]\n:KUBE-HP-Y6D5FXH7EWDUJMUT - [0:0]\n-X KUBE-HP-AKBYHZXKIFLULUUC\n-X KUBE-HP-Y6D5FXH7EWDUJMUT\nCOMMIT\n"
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.906039478Z" level=info msg="Closing host port tcp:80"
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.906073218Z" level=info msg="Closing host port tcp:443"
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.907650251Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.907672194Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.907804384Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7559cbf597-gwb2j Namespace:ingress-nginx ID:f878ace8a058203facb2add220fa77f876ec82f1febe14512489d13b4a49a653 UID:a1822f52-6bae-49db-b9e5-b161fa51cb6f NetNS:/var/run/netns/a5471104-38f9-46b7-8057-b5e8966454d8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.907917904Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7559cbf597-gwb2j from CNI network \"kindnet\" (type=ptp)"
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.945934238Z" level=info msg="Stopped pod sandbox: f878ace8a058203facb2add220fa77f876ec82f1febe14512489d13b4a49a653" id=e1282220-ffb3-4540-a1a5-3f1b6257c3ae name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:07 addons-703024 crio[1033]: time="2024-08-15 17:11:07.156925026Z" level=info msg="Removing container: c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3" id=3dff452b-555c-45bc-9117-132f11d37388 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 17:11:07 addons-703024 crio[1033]: time="2024-08-15 17:11:07.170967246Z" level=info msg="Removed container c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3: ingress-nginx/ingress-nginx-controller-7559cbf597-gwb2j/controller" id=3dff452b-555c-45bc-9117-132f11d37388 name=/runtime.v1.RuntimeService/RemoveContainer
	
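One note on the CRI-O log above: the conmon message "container 6099 exited with status 137" is consistent with the 2-second stop timeout warned about just before it, since 137 = 128 + 9, the convention for death by SIGKILL. In bash the status can be decoded directly:

	# Exit statuses above 128 encode 128 + signal number.
	kill -l 137   # prints: KILL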
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4085bf7aee9cf       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        8 seconds ago       Running             hello-world-app           0                   e5ed20a3b12db       hello-world-app-55bf9c44b4-snj2m
	7ecc6adc40013       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   f4934699c8ef7       nginx
	2a9c8bcf784b3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   dd584d9ef798a       busybox
	689321e5039a2       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago       Exited              patch                     3                   95b91c9934d00       ingress-nginx-admission-patch-gswvm
	5ca7783051242       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   0df8d50c66141       ingress-nginx-admission-create-b729r
	f9ecaf2bf81c5       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   0654c245bdc8a       metrics-server-8988944d9-flc8s
	e91e474418831       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   ea09e370c8832       coredns-6f6b679f8f-qkxj6
	2367ce375da91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   10d3bce2278c3       storage-provisioner
	3cad0bae577bb       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                           5 minutes ago       Running             kindnet-cni               0                   b827c5f30f7ae       kindnet-c9vlm
	a2610cc2f65a0       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   2c5ece15c945e       kube-proxy-nsvg6
	ebb1bbdb3320c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   de4bf6c6026a6       kube-scheduler-addons-703024
	71100fb2e4a17       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   d5be8c5a21aaa       kube-controller-manager-addons-703024
	3c5f0d2c0cdcd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   3dad7d545c672       etcd-addons-703024
	3b76e391faf0b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   f8c1a94595bf8       kube-apiserver-addons-703024
	
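The two Exited rows above (the ingress-nginx admission create and patch jobs, the latter on ATTEMPT 3) can be isolated on the node with crictl's state filter; a sketch, assuming the same crictl binary invoked in the commands logged earlier:

	# List only exited containers to spot completed or crashed jobs.
	sudo crictl ps -a --state exited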
	
	==> coredns [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107] <==
	[INFO] 10.244.0.18:44457 - 45960 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067958s
	[INFO] 10.244.0.18:41997 - 2442 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004328647s
	[INFO] 10.244.0.18:41997 - 17286 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004745026s
	[INFO] 10.244.0.18:54351 - 12249 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005476729s
	[INFO] 10.244.0.18:54351 - 60380 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006174456s
	[INFO] 10.244.0.18:38182 - 3648 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005495164s
	[INFO] 10.244.0.18:38182 - 5959 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006003871s
	[INFO] 10.244.0.18:34511 - 43369 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077113s
	[INFO] 10.244.0.18:34511 - 6762 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000135304s
	[INFO] 10.244.0.21:53443 - 15524 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188799s
	[INFO] 10.244.0.21:43106 - 30021 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000242695s
	[INFO] 10.244.0.21:33523 - 58160 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125607s
	[INFO] 10.244.0.21:60433 - 13007 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017985s
	[INFO] 10.244.0.21:57274 - 41431 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085854s
	[INFO] 10.244.0.21:34080 - 31342 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148649s
	[INFO] 10.244.0.21:41356 - 29279 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005300217s
	[INFO] 10.244.0.21:38023 - 38694 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005509356s
	[INFO] 10.244.0.21:60440 - 42289 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005143971s
	[INFO] 10.244.0.21:47892 - 44613 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006812477s
	[INFO] 10.244.0.21:36159 - 61670 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005681369s
	[INFO] 10.244.0.21:49914 - 37324 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006294609s
	[INFO] 10.244.0.21:39400 - 26494 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00078324s
	[INFO] 10.244.0.21:52668 - 23496 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000865943s
	[INFO] 10.244.0.26:47241 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000220749s
	[INFO] 10.244.0.26:42791 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000119979s
	
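The NXDOMAIN entries above are the expected side effect of the pod resolver's search list: each unqualified lookup is retried with the cluster.local and GCE-internal suffixes before the bare name resolves. A quick way to observe this from inside a pod, using the busybox pod from the container table above as a stand-in:

	# An unqualified name walks the search list; the trailing dot makes the
	# second name fully qualified, so it is looked up exactly once.
	kubectl exec busybox -- nslookup registry.kube-system
	kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local.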
	
	==> describe nodes <==
	Name:               addons-703024
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-703024
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=addons-703024
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_05_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-703024
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:05:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-703024
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:11:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:08:49 +0000   Thu, 15 Aug 2024 17:05:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:08:49 +0000   Thu, 15 Aug 2024 17:05:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:08:49 +0000   Thu, 15 Aug 2024 17:05:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:08:49 +0000   Thu, 15 Aug 2024 17:06:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-703024
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 1aa33dd6d4c249a48c60190f74f2479d
	  System UUID:                bf551e8d-2b73-4bbf-8d69-9efc34772b05
	  Boot ID:                    2d86d768-5fa6-4bed-a8b9-fa4131d6b0e8
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  default                     hello-world-app-55bf9c44b4-snj2m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-6f6b679f8f-qkxj6                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m19s
	  kube-system                 etcd-addons-703024                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m25s
	  kube-system                 kindnet-c9vlm                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m19s
	  kube-system                 kube-apiserver-addons-703024             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-addons-703024    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-nsvg6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-addons-703024             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 metrics-server-8988944d9-flc8s           100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m16s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m15s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node addons-703024 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node addons-703024 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m30s (x7 over 5m30s)  kubelet          Node addons-703024 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m25s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m25s                  kubelet          Node addons-703024 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m25s                  kubelet          Node addons-703024 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m25s                  kubelet          Node addons-703024 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m20s                  node-controller  Node addons-703024 event: Registered Node addons-703024 in Controller
	  Normal   NodeReady                5m1s                   kubelet          Node addons-703024 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 02 42 7e dc ac 84 02 42 c0 a8 5e 02 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b84f812507c4
	[  +0.000003] ll header: 00000000: 02 42 7e dc ac 84 02 42 c0 a8 5e 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b84f812507c4
	[  +0.000002] ll header: 00000000: 02 42 7e dc ac 84 02 42 c0 a8 5e 02 08 00
	[Aug15 16:15] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-12dfa1aa7ae6
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-12dfa1aa7ae6
	[  +0.000005] ll header: 00000000: 02 42 9e 55 12 5a 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 9e 55 12 5a 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-12dfa1aa7ae6
	[  +0.000001] ll header: 00000000: 02 42 9e 55 12 5a 02 42 c0 a8 55 02 08 00
	[Aug15 17:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[  +1.027553] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[  +2.015829] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[  +4.191667] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[Aug15 17:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[ +16.126812] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[ +33.277609] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	
	
	==> etcd [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03] <==
	{"level":"info","ts":"2024-08-15T17:05:54.854364Z","caller":"traceutil/trace.go:171","msg":"trace[681308291] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:416; }","duration":"187.683267ms","start":"2024-08-15T17:05:54.666665Z","end":"2024-08-15T17:05:54.854348Z","steps":["trace[681308291] 'agreement among raft nodes before linearized reading'  (duration: 187.454614ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:54.854237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.105941ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:05:54.854814Z","caller":"traceutil/trace.go:171","msg":"trace[1328195484] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:416; }","duration":"190.681149ms","start":"2024-08-15T17:05:54.664120Z","end":"2024-08-15T17:05:54.854801Z","steps":["trace[1328195484] 'agreement among raft nodes before linearized reading'  (duration: 190.088655ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:54.854312Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.389175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3144"}
	{"level":"info","ts":"2024-08-15T17:05:54.855197Z","caller":"traceutil/trace.go:171","msg":"trace[2066636726] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:416; }","duration":"190.267686ms","start":"2024-08-15T17:05:54.664918Z","end":"2024-08-15T17:05:54.855185Z","steps":["trace[2066636726] 'agreement among raft nodes before linearized reading'  (duration: 189.343554ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:54.871244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.876512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:140"}
	{"level":"info","ts":"2024-08-15T17:05:54.876838Z","caller":"traceutil/trace.go:171","msg":"trace[1225753751] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:423; }","duration":"111.470103ms","start":"2024-08-15T17:05:54.765347Z","end":"2024-08-15T17:05:54.876817Z","steps":["trace[1225753751] 'agreement among raft nodes before linearized reading'  (duration: 104.390693ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:05:55.258919Z","caller":"traceutil/trace.go:171","msg":"trace[1448782441] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"182.470761ms","start":"2024-08-15T17:05:55.076429Z","end":"2024-08-15T17:05:55.258899Z","steps":["trace[1448782441] 'process raft request'  (duration: 178.385492ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:05:55.258984Z","caller":"traceutil/trace.go:171","msg":"trace[1592769233] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"105.759591ms","start":"2024-08-15T17:05:55.153206Z","end":"2024-08-15T17:05:55.258966Z","steps":["trace[1592769233] 'process raft request'  (duration: 105.338276ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:05:55.259081Z","caller":"traceutil/trace.go:171","msg":"trace[395357935] linearizableReadLoop","detail":"{readStateIndex:448; appliedIndex:446; }","duration":"105.956359ms","start":"2024-08-15T17:05:55.153115Z","end":"2024-08-15T17:05:55.259071Z","steps":["trace[395357935] 'read index received'  (duration: 1.570649ms)","trace[395357935] 'applied index is now lower than readState.Index'  (duration: 104.384991ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T17:05:55.259152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.02251ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-nsvg6\" ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2024-08-15T17:05:55.260173Z","caller":"traceutil/trace.go:171","msg":"trace[2126902612] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-nsvg6; range_end:; response_count:1; response_revision:439; }","duration":"107.052358ms","start":"2024-08-15T17:05:55.153109Z","end":"2024-08-15T17:05:55.260161Z","steps":["trace[2126902612] 'agreement among raft nodes before linearized reading'  (duration: 105.98928ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:55.260341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.029767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-15T17:05:55.260395Z","caller":"traceutil/trace.go:171","msg":"trace[844264442] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:439; }","duration":"107.091451ms","start":"2024-08-15T17:05:55.153295Z","end":"2024-08-15T17:05:55.260386Z","steps":["trace[844264442] 'agreement among raft nodes before linearized reading'  (duration: 107.003589ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:05:55.259178Z","caller":"traceutil/trace.go:171","msg":"trace[882611892] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"105.815786ms","start":"2024-08-15T17:05:55.153356Z","end":"2024-08-15T17:05:55.259172Z","steps":["trace[882611892] 'process raft request'  (duration: 105.253992ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:55.260919Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.863087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:05:55.259203Z","caller":"traceutil/trace.go:171","msg":"trace[242474293] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"101.056527ms","start":"2024-08-15T17:05:55.158141Z","end":"2024-08-15T17:05:55.259198Z","steps":["trace[242474293] 'process raft request'  (duration: 100.50698ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:55.264368Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.900635ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3454"}
	{"level":"info","ts":"2024-08-15T17:05:55.265865Z","caller":"traceutil/trace.go:171","msg":"trace[496978317] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:439; }","duration":"112.39925ms","start":"2024-08-15T17:05:55.153452Z","end":"2024-08-15T17:05:55.265851Z","steps":["trace[496978317] 'agreement among raft nodes before linearized reading'  (duration: 110.711625ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:05:55.266064Z","caller":"traceutil/trace.go:171","msg":"trace[91201434] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:439; }","duration":"110.009431ms","start":"2024-08-15T17:05:55.156045Z","end":"2024-08-15T17:05:55.266054Z","steps":["trace[91201434] 'agreement among raft nodes before linearized reading'  (duration: 104.849343ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:07:00.465409Z","caller":"traceutil/trace.go:171","msg":"trace[363298197] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"102.704028ms","start":"2024-08-15T17:07:00.362681Z","end":"2024-08-15T17:07:00.465385Z","steps":["trace[363298197] 'process raft request'  (duration: 84.870409ms)","trace[363298197] 'compare'  (duration: 17.678593ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T17:07:00.465506Z","caller":"traceutil/trace.go:171","msg":"trace[1027133607] transaction","detail":"{read_only:false; response_revision:1237; number_of_response:1; }","duration":"100.730515ms","start":"2024-08-15T17:07:00.364755Z","end":"2024-08-15T17:07:00.465486Z","steps":["trace[1027133607] 'process raft request'  (duration: 100.576118ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:07:06.088456Z","caller":"traceutil/trace.go:171","msg":"trace[1404637144] transaction","detail":"{read_only:false; response_revision:1257; number_of_response:1; }","duration":"112.660711ms","start":"2024-08-15T17:07:05.975767Z","end":"2024-08-15T17:07:06.088428Z","steps":["trace[1404637144] 'process raft request'  (duration: 111.880074ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:08:41.397318Z","caller":"traceutil/trace.go:171","msg":"trace[1016833019] transaction","detail":"{read_only:false; response_revision:1906; number_of_response:1; }","duration":"106.662872ms","start":"2024-08-15T17:08:41.290633Z","end":"2024-08-15T17:08:41.397296Z","steps":["trace[1016833019] 'process raft request'  (duration: 106.027956ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:08:41.397352Z","caller":"traceutil/trace.go:171","msg":"trace[994476116] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1907; }","duration":"106.148462ms","start":"2024-08-15T17:08:41.291185Z","end":"2024-08-15T17:08:41.397334Z","steps":["trace[994476116] 'process raft request'  (duration: 105.992369ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:11:12 up  1:53,  0 users,  load average: 0.18, 0.43, 0.32
	Linux addons-703024 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358] <==
	E0815 17:10:10.234674       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 17:10:10.253698       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:10:10.253732       1 main.go:299] handling current node
	I0815 17:10:20.253411       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:10:20.253447       1 main.go:299] handling current node
	W0815 17:10:26.979225       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 17:10:26.979258       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 17:10:30.253354       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:10:30.253386       1 main.go:299] handling current node
	W0815 17:10:30.926354       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:10:30.926385       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 17:10:40.253397       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:10:40.253439       1 main.go:299] handling current node
	I0815 17:10:50.253684       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:10:50.253726       1 main.go:299] handling current node
	I0815 17:11:00.253731       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:11:00.253774       1 main.go:299] handling current node
	W0815 17:11:05.107202       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:11:05.107237       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0815 17:11:08.608268       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 17:11:08.608305       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0815 17:11:10.113326       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 17:11:10.113374       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 17:11:10.253720       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:11:10.253762       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc] <==
	E0815 17:07:58.227636       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0815 17:07:58.232817       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0815 17:08:13.234265       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0815 17:08:13.871541       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0815 17:08:14.675872       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:40078: read: connection reset by peer
	E0815 17:08:14.680897       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54356: use of closed network connection
	I0815 17:08:17.917759       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.196.218"}
	I0815 17:08:40.690176       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 17:08:40.865091       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.95.210"}
	I0815 17:08:41.185500       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 17:08:42.400462       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0815 17:08:43.066391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:08:43.066447       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:08:43.173728       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:08:43.173783       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:08:43.254718       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:08:43.254765       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:08:43.264029       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:08:43.264078       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:08:43.267449       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:08:43.267876       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0815 17:08:44.255030       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0815 17:08:44.268130       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0815 17:08:44.464760       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0815 17:11:01.806310       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.107.246"}
	
	
	==> kube-controller-manager [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f] <==
	W0815 17:09:44.143659       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:09:44.143707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:09:52.136010       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:09:52.136063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:10:02.605213       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:10:02.605268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:10:04.027040       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:10:04.027084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:10:33.369192       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:10:33.369234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:10:34.233306       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:10:34.233358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:10:37.640425       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:10:37.640474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:10:47.182137       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:10:47.182184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 17:11:01.605581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.481682ms"
	I0815 17:11:01.609902       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.265453ms"
	I0815 17:11:01.610531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.062µs"
	I0815 17:11:01.615652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="30.877µs"
	I0815 17:11:03.715807       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0815 17:11:03.717289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="10.716µs"
	I0815 17:11:03.719525       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0815 17:11:04.158466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.260217ms"
	I0815 17:11:04.158551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.392µs"
	
	
	==> kube-proxy [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c] <==
	I0815 17:05:55.364792       1 server_linux.go:66] "Using iptables proxy"
	I0815 17:05:56.072374       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0815 17:05:56.077051       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:05:56.473069       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0815 17:05:56.473237       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:05:56.476933       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:05:56.477666       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:05:56.477692       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:05:56.482443       1 config.go:197] "Starting service config controller"
	I0815 17:05:56.482467       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:05:56.482508       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:05:56.482512       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:05:56.482845       1 config.go:326] "Starting node config controller"
	I0815 17:05:56.482852       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:05:56.652874       1 shared_informer.go:320] Caches are synced for node config
	I0815 17:05:56.652923       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:05:56.653035       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627] <==
	W0815 17:05:44.575955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:05:44.575979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.409182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:05:45.409220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.498688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 17:05:45.498735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.527119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:05:45.527167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.554826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:05:45.554872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.625783       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 17:05:45.625832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.649032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 17:05:45.649076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.668376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:05:45.668425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.677941       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 17:05:45.677977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.726301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 17:05:45.726344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.739847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 17:05:45.739917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.863217       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 17:05:45.863262       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 17:05:48.173234       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 17:11:01 addons-703024 kubelet[1646]: I0815 17:11:01.779692    1646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmbpk\" (UniqueName: \"kubernetes.io/projected/d7beb9ea-8e43-41a5-b081-a8bde1a47e6e-kube-api-access-tmbpk\") pod \"hello-world-app-55bf9c44b4-snj2m\" (UID: \"d7beb9ea-8e43-41a5-b081-a8bde1a47e6e\") " pod="default/hello-world-app-55bf9c44b4-snj2m"
	Aug 15 17:11:02 addons-703024 kubelet[1646]: I0815 17:11:02.686082    1646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxmrv\" (UniqueName: \"kubernetes.io/projected/e819b06b-0df3-45f9-a0de-807192f6978e-kube-api-access-qxmrv\") pod \"e819b06b-0df3-45f9-a0de-807192f6978e\" (UID: \"e819b06b-0df3-45f9-a0de-807192f6978e\") "
	Aug 15 17:11:02 addons-703024 kubelet[1646]: I0815 17:11:02.688236    1646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e819b06b-0df3-45f9-a0de-807192f6978e-kube-api-access-qxmrv" (OuterVolumeSpecName: "kube-api-access-qxmrv") pod "e819b06b-0df3-45f9-a0de-807192f6978e" (UID: "e819b06b-0df3-45f9-a0de-807192f6978e"). InnerVolumeSpecName "kube-api-access-qxmrv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 17:11:02 addons-703024 kubelet[1646]: I0815 17:11:02.786541    1646 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qxmrv\" (UniqueName: \"kubernetes.io/projected/e819b06b-0df3-45f9-a0de-807192f6978e-kube-api-access-qxmrv\") on node \"addons-703024\" DevicePath \"\""
	Aug 15 17:11:03 addons-703024 kubelet[1646]: I0815 17:11:03.140871    1646 scope.go:117] "RemoveContainer" containerID="23b322a9c69db4a1a9116eca708582969a948cd2ed972e196b95fa8141b111f9"
	Aug 15 17:11:03 addons-703024 kubelet[1646]: I0815 17:11:03.156943    1646 scope.go:117] "RemoveContainer" containerID="23b322a9c69db4a1a9116eca708582969a948cd2ed972e196b95fa8141b111f9"
	Aug 15 17:11:03 addons-703024 kubelet[1646]: E0815 17:11:03.157364    1646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b322a9c69db4a1a9116eca708582969a948cd2ed972e196b95fa8141b111f9\": container with ID starting with 23b322a9c69db4a1a9116eca708582969a948cd2ed972e196b95fa8141b111f9 not found: ID does not exist" containerID="23b322a9c69db4a1a9116eca708582969a948cd2ed972e196b95fa8141b111f9"
	Aug 15 17:11:03 addons-703024 kubelet[1646]: I0815 17:11:03.157400    1646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b322a9c69db4a1a9116eca708582969a948cd2ed972e196b95fa8141b111f9"} err="failed to get container status \"23b322a9c69db4a1a9116eca708582969a948cd2ed972e196b95fa8141b111f9\": rpc error: code = NotFound desc = could not find container \"23b322a9c69db4a1a9116eca708582969a948cd2ed972e196b95fa8141b111f9\": container with ID starting with 23b322a9c69db4a1a9116eca708582969a948cd2ed972e196b95fa8141b111f9 not found: ID does not exist"
	Aug 15 17:11:04 addons-703024 kubelet[1646]: I0815 17:11:04.153248    1646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-snj2m" podStartSLOduration=1.5621804080000001 podStartE2EDuration="3.153225027s" podCreationTimestamp="2024-08-15 17:11:01 +0000 UTC" firstStartedPulling="2024-08-15 17:11:01.96876252 +0000 UTC m=+315.213825129" lastFinishedPulling="2024-08-15 17:11:03.559807139 +0000 UTC m=+316.804869748" observedRunningTime="2024-08-15 17:11:04.153008551 +0000 UTC m=+317.398071178" watchObservedRunningTime="2024-08-15 17:11:04.153225027 +0000 UTC m=+317.398287654"
	Aug 15 17:11:04 addons-703024 kubelet[1646]: I0815 17:11:04.863496    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b918a2-bded-4f14-83fb-ffaf9a662c51" path="/var/lib/kubelet/pods/22b918a2-bded-4f14-83fb-ffaf9a662c51/volumes"
	Aug 15 17:11:04 addons-703024 kubelet[1646]: I0815 17:11:04.863943    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83bfa305-c996-4179-bf94-f76df7678eef" path="/var/lib/kubelet/pods/83bfa305-c996-4179-bf94-f76df7678eef/volumes"
	Aug 15 17:11:04 addons-703024 kubelet[1646]: I0815 17:11:04.864238    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e819b06b-0df3-45f9-a0de-807192f6978e" path="/var/lib/kubelet/pods/e819b06b-0df3-45f9-a0de-807192f6978e/volumes"
	Aug 15 17:11:06 addons-703024 kubelet[1646]: E0815 17:11:06.998475    1646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741866998243873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:11:06 addons-703024 kubelet[1646]: E0815 17:11:06.998508    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741866998243873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:11:07 addons-703024 kubelet[1646]: I0815 17:11:07.113723    1646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1822f52-6bae-49db-b9e5-b161fa51cb6f-webhook-cert\") pod \"a1822f52-6bae-49db-b9e5-b161fa51cb6f\" (UID: \"a1822f52-6bae-49db-b9e5-b161fa51cb6f\") "
	Aug 15 17:11:07 addons-703024 kubelet[1646]: I0815 17:11:07.113770    1646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qbf9\" (UniqueName: \"kubernetes.io/projected/a1822f52-6bae-49db-b9e5-b161fa51cb6f-kube-api-access-9qbf9\") pod \"a1822f52-6bae-49db-b9e5-b161fa51cb6f\" (UID: \"a1822f52-6bae-49db-b9e5-b161fa51cb6f\") "
	Aug 15 17:11:07 addons-703024 kubelet[1646]: I0815 17:11:07.115606    1646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1822f52-6bae-49db-b9e5-b161fa51cb6f-kube-api-access-9qbf9" (OuterVolumeSpecName: "kube-api-access-9qbf9") pod "a1822f52-6bae-49db-b9e5-b161fa51cb6f" (UID: "a1822f52-6bae-49db-b9e5-b161fa51cb6f"). InnerVolumeSpecName "kube-api-access-9qbf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 17:11:07 addons-703024 kubelet[1646]: I0815 17:11:07.115689    1646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1822f52-6bae-49db-b9e5-b161fa51cb6f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a1822f52-6bae-49db-b9e5-b161fa51cb6f" (UID: "a1822f52-6bae-49db-b9e5-b161fa51cb6f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 15 17:11:07 addons-703024 kubelet[1646]: I0815 17:11:07.155323    1646 scope.go:117] "RemoveContainer" containerID="c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3"
	Aug 15 17:11:07 addons-703024 kubelet[1646]: I0815 17:11:07.171217    1646 scope.go:117] "RemoveContainer" containerID="c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3"
	Aug 15 17:11:07 addons-703024 kubelet[1646]: E0815 17:11:07.171541    1646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3\": container with ID starting with c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3 not found: ID does not exist" containerID="c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3"
	Aug 15 17:11:07 addons-703024 kubelet[1646]: I0815 17:11:07.171586    1646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3"} err="failed to get container status \"c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3\": rpc error: code = NotFound desc = could not find container \"c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3\": container with ID starting with c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3 not found: ID does not exist"
	Aug 15 17:11:07 addons-703024 kubelet[1646]: I0815 17:11:07.214850    1646 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1822f52-6bae-49db-b9e5-b161fa51cb6f-webhook-cert\") on node \"addons-703024\" DevicePath \"\""
	Aug 15 17:11:07 addons-703024 kubelet[1646]: I0815 17:11:07.214880    1646 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9qbf9\" (UniqueName: \"kubernetes.io/projected/a1822f52-6bae-49db-b9e5-b161fa51cb6f-kube-api-access-9qbf9\") on node \"addons-703024\" DevicePath \"\""
	Aug 15 17:11:08 addons-703024 kubelet[1646]: I0815 17:11:08.863804    1646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1822f52-6bae-49db-b9e5-b161fa51cb6f" path="/var/lib/kubelet/pods/a1822f52-6bae-49db-b9e5-b161fa51cb6f/volumes"
	
	
	==> storage-provisioner [2367ce375da91bbe5b92ba7e6ed79bebfc4f04ff85717728ddd65239f23388bc] <==
	I0815 17:06:11.402385       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 17:06:11.456627       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 17:06:11.456681       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 17:06:11.465445       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 17:06:11.465625       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-703024_1c0de0d6-d953-4030-88da-526a4eb6bff7!
	I0815 17:06:11.466374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ffb4816b-b285-4153-8f69-80ad7ec9bddb", APIVersion:"v1", ResourceVersion:"938", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-703024_1c0de0d6-d953-4030-88da-526a4eb6bff7 became leader
	I0815 17:06:11.566107       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-703024_1c0de0d6-d953-4030-88da-526a4eb6bff7!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-703024 -n addons-703024
helpers_test.go:261: (dbg) Run:  kubectl --context addons-703024 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.29s)
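The post-mortem above only captures cluster state after the ingress addon had already been disabled (note the ingress-nginx-controller ReplicaSet teardown in the controller-manager log). A hedged triage sketch for a rerun, using the ingress-nginx addon's conventional object names rather than anything recorded in this report:

# Hypothetical manual triage; names below are ingress-nginx defaults, not values from this run.
kubectl --context addons-703024 -n ingress-nginx get pods,svc,endpoints
# The controller log usually names why a Host-routed request never got an answer.
kubectl --context addons-703024 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
# Confirm the Ingress object was admitted and received an address.
kubectl --context addons-703024 get ingress -A -o wide
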
TestAddons/parallel/MetricsServer (331.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.402515ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-flc8s" [1b94ea1a-e1d1-45d5-ba12-31457ddd2aab] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003278767s
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (61.361096ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 2m19.939304862s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (75.476536ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 2m23.86370951s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (70.506706ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 2m27.883232214s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (64.369029ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 2m34.185577342s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (64.33745ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 2m47.631116471s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (63.581044ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 2m59.10478329s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (62.034255ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 3m13.219663724s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (63.790243ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 3m43.678905067s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (61.667464ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 4m14.103609949s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (63.353328ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 5m18.738712341s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (63.719706ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 6m37.107990851s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-703024 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-703024 top pods -n kube-system: exit status 1 (63.008825ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qkxj6, age: 7m43.413304839s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
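For context, the check that just timed out boils down to re-running kubectl top pods in kube-system until metrics-server starts answering or a deadline passes. A minimal sketch of such a retry loop in Go (a hypothetical helper, not the actual addons_test.go code; the context name and 5-minute budget are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPodMetrics polls `kubectl top pods` in the given namespace until the
// command exits 0 (metrics-server is serving data) or the deadline passes.
func waitForPodMetrics(kubectlContext, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--context", kubectlContext, "top", "pods", "-n", namespace)
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return nil
		}
		time.Sleep(5 * time.Second) // the real test backs off between attempts
	}
	return fmt.Errorf("no pod metrics in namespace %q after %s", namespace, timeout)
}

func main() {
	if err := waitForPodMetrics("addons-703024", "kube-system", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The log above shows exactly this shape: each "Non-zero exit" line is one failed attempt, with the pod age in the stderr message marking how long metrics have been missing.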
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-703024
helpers_test.go:235: (dbg) docker inspect addons-703024:
-- stdout --
	[
	    {
	        "Id": "2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f",
	        "Created": "2024-08-15T17:05:31.00634759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 386147,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T17:05:31.108071249Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:49d4702e5c94195d7796cb79f5fbc9d7cc584c1c41f3c58bf1694d1da009b2f6",
	        "ResolvConfPath": "/var/lib/docker/containers/2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f/hosts",
	        "LogPath": "/var/lib/docker/containers/2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f/2d94eb4aadd4eb2a872d1fdc10a162cfd2cae312c141d2eeede1b536377d509f-json.log",
	        "Name": "/addons-703024",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-703024:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-703024",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af4570059a0f0808481c40ea677a6be381ccd02833f96d974e8555f4e9622388-init/diff:/var/lib/docker/overlay2/debad26787101f2e0bd77abae2a4f62ccd76a5180cc196365483720250fb2357/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af4570059a0f0808481c40ea677a6be381ccd02833f96d974e8555f4e9622388/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af4570059a0f0808481c40ea677a6be381ccd02833f96d974e8555f4e9622388/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af4570059a0f0808481c40ea677a6be381ccd02833f96d974e8555f4e9622388/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-703024",
	                "Source": "/var/lib/docker/volumes/addons-703024/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-703024",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-703024",
	                "name.minikube.sigs.k8s.io": "addons-703024",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b0c24a6e73bd999708ab5ad9f98c76d95319fb0fb88fa8553446a35e7e83eb0",
	            "SandboxKey": "/var/run/docker/netns/6b0c24a6e73b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-703024": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1c3f8d471e58852380a4ac912f81ccc3ecb004bd521310a2ab761467bf472c1",
	                    "EndpointID": "2194ee0aa57dc96f114bf91e71e00a8ed99b086bb2586042eeff49fb75dbb5d0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-703024",
	                        "2d94eb4aadd4"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
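The inspect dump above is what the post-mortem helpers work from, and pulling a single field out of it needs only standard-library JSON decoding. A small sketch (struct fields mirror the output shown; not minikube's own helper) that recovers the published host port for 22/tcp:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container mirrors just the fragment of `docker inspect` output we need.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-703024").Output()
	if err != nil {
		panic(err)
	}
	var cs []container // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	if len(cs) == 0 {
		panic("no such container")
	}
	for _, b := range cs[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:33138 above
	}
}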
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-703024 -n addons-703024
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-703024 logs -n 25: (1.070235165s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-962475                                                                   | download-docker-962475 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-527485   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | binary-mirror-527485                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39117                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-527485                                                                     | binary-mirror-527485   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| addons  | enable dashboard -p                                                                         | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | addons-703024                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | addons-703024                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-703024 --wait=true                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:07 UTC | 15 Aug 24 17:07 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:07 UTC | 15 Aug 24 17:07 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-703024 ssh cat                                                                       | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:07 UTC | 15 Aug 24 17:07 UTC |
	|         | /opt/local-path-provisioner/pvc-50d57a12-86e5-43f7-b121-a6d8b09e9508_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:07 UTC | 15 Aug 24 17:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-703024 ip                                                                            | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | -p addons-703024                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | addons-703024                                                                               |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | -p addons-703024                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-703024 addons                                                                        | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | addons-703024                                                                               |                        |         |         |                     |                     |
	| addons  | addons-703024 addons                                                                        | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC | 15 Aug 24 17:08 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-703024 ssh curl -s                                                                   | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-703024 ip                                                                            | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-703024 addons disable                                                                | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-703024 addons                                                                        | addons-703024          | jenkins | v1.33.1 | 15 Aug 24 17:13 UTC | 15 Aug 24 17:13 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:05:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:05:08.718581  385407 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:08.718720  385407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:08.718730  385407 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:08.718734  385407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:08.719067  385407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:05:08.719775  385407 out.go:352] Setting JSON to false
	I0815 17:05:08.720701  385407 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6461,"bootTime":1723735048,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:05:08.720761  385407 start.go:139] virtualization: kvm guest
	I0815 17:05:08.722699  385407 out.go:177] * [addons-703024] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:05:08.723959  385407 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:05:08.724034  385407 notify.go:220] Checking for updates...
	I0815 17:05:08.726421  385407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:08.727704  385407 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:05:08.728859  385407 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	I0815 17:05:08.730032  385407 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:05:08.731094  385407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:05:08.732211  385407 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:08.752582  385407 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:05:08.752702  385407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:08.800823  385407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-15 17:05:08.791208744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:05:08.800928  385407 docker.go:307] overlay module found
	I0815 17:05:08.802609  385407 out.go:177] * Using the docker driver based on user configuration
	I0815 17:05:08.803736  385407 start.go:297] selected driver: docker
	I0815 17:05:08.803758  385407 start.go:901] validating driver "docker" against <nil>
	I0815 17:05:08.803775  385407 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:05:08.804536  385407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:08.847960  385407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-15 17:05:08.838992575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:05:08.848128  385407 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:05:08.848335  385407 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:05:08.849891  385407 out.go:177] * Using Docker driver with root privileges
	I0815 17:05:08.851239  385407 cni.go:84] Creating CNI manager for ""
	I0815 17:05:08.851256  385407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 17:05:08.851268  385407 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:08.851345  385407 start.go:340] cluster config:
	{Name:addons-703024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-703024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:08.852767  385407 out.go:177] * Starting "addons-703024" primary control-plane node in "addons-703024" cluster
	I0815 17:05:08.853897  385407 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 17:05:08.854953  385407 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:05:08.856009  385407 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:05:08.856035  385407 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:05:08.856042  385407 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:05:08.856140  385407 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:08.856220  385407 preload.go:172] Found /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:05:08.856231  385407 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:05:08.856598  385407 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/config.json ...
	I0815 17:05:08.856628  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/config.json: {Name:mk1d0408945a591f5c5e1721189ffc9aa5843ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:08.872658  385407 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:05:08.872826  385407 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:05:08.872844  385407 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:05:08.872849  385407 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:05:08.872859  385407 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:05:08.872866  385407 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 17:05:21.435650  385407 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 17:05:21.435707  385407 cache.go:194] Successfully downloaded all kic artifacts
	I0815 17:05:21.435782  385407 start.go:360] acquireMachinesLock for addons-703024: {Name:mk4736efa8f9335340b5139086cb62f2d9137682 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:21.435878  385407 start.go:364] duration metric: took 76.734µs to acquireMachinesLock for "addons-703024"
	I0815 17:05:21.435905  385407 start.go:93] Provisioning new machine with config: &{Name:addons-703024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-703024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:05:21.436006  385407 start.go:125] createHost starting for "" (driver="docker")
	I0815 17:05:21.529990  385407 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0815 17:05:21.530285  385407 start.go:159] libmachine.API.Create for "addons-703024" (driver="docker")
	I0815 17:05:21.530328  385407 client.go:168] LocalClient.Create starting
	I0815 17:05:21.530463  385407 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem
	I0815 17:05:21.572307  385407 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem
	I0815 17:05:21.646767  385407 cli_runner.go:164] Run: docker network inspect addons-703024 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0815 17:05:21.662561  385407 cli_runner.go:211] docker network inspect addons-703024 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0815 17:05:21.662637  385407 network_create.go:284] running [docker network inspect addons-703024] to gather additional debugging logs...
	I0815 17:05:21.662655  385407 cli_runner.go:164] Run: docker network inspect addons-703024
	W0815 17:05:21.677513  385407 cli_runner.go:211] docker network inspect addons-703024 returned with exit code 1
	I0815 17:05:21.677547  385407 network_create.go:287] error running [docker network inspect addons-703024]: docker network inspect addons-703024: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-703024 not found
	I0815 17:05:21.677574  385407 network_create.go:289] output of [docker network inspect addons-703024]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-703024 not found
	
	** /stderr **
	I0815 17:05:21.677667  385407 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:05:21.693179  385407 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cfa2e0}
	I0815 17:05:21.693238  385407 network_create.go:124] attempt to create docker network addons-703024 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0815 17:05:21.693311  385407 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-703024 addons-703024
	I0815 17:05:22.042022  385407 network_create.go:108] docker network addons-703024 192.168.49.0/24 created
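	[editor's note: the network_create lines above compress two steps: minikube first probes for a private /24 that nothing local already occupies, then issues the `docker network create` shown. A simplified sketch of that probe, in Go; the candidate list and interface-overlap test are illustrative assumptions, not minikube's exact network.go logic:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	// freePrivateSubnet returns the first candidate CIDR that does not overlap
	// any address currently assigned to a local interface.
	func freePrivateSubnet(candidates []string) (*net.IPNet, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return nil, err
		}
		for _, c := range candidates {
			_, subnet, err := net.ParseCIDR(c)
			if err != nil {
				return nil, err
			}
			taken := false
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
					taken = true
					break
				}
			}
			if !taken {
				return subnet, nil
			}
		}
		return nil, fmt.Errorf("no free subnet among %v", candidates)
	}
	
	func main() {
		subnet, err := freePrivateSubnet([]string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"})
		if err != nil {
			panic(err)
		}
		fmt.Println("using", subnet) // 192.168.49.0/24 on this agent, as logged above
	}
	]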
	I0815 17:05:22.042054  385407 kic.go:121] calculated static IP "192.168.49.2" for the "addons-703024" container
	I0815 17:05:22.042126  385407 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0815 17:05:22.056896  385407 cli_runner.go:164] Run: docker volume create addons-703024 --label name.minikube.sigs.k8s.io=addons-703024 --label created_by.minikube.sigs.k8s.io=true
	I0815 17:05:22.158167  385407 oci.go:103] Successfully created a docker volume addons-703024
	I0815 17:05:22.158296  385407 cli_runner.go:164] Run: docker run --rm --name addons-703024-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703024 --entrypoint /usr/bin/test -v addons-703024:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0815 17:05:26.629800  385407 cli_runner.go:217] Completed: docker run --rm --name addons-703024-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703024 --entrypoint /usr/bin/test -v addons-703024:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib: (4.471459636s)
	I0815 17:05:26.629847  385407 oci.go:107] Successfully prepared a docker volume addons-703024
	I0815 17:05:26.629874  385407 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:05:26.629896  385407 kic.go:194] Starting extracting preloaded images to volume ...
	I0815 17:05:26.629956  385407 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-703024:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
	I0815 17:05:30.949085  385407 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-703024:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (4.319078798s)
	I0815 17:05:30.949117  385407 kic.go:203] duration metric: took 4.319216387s to extract preloaded images to volume ...
	W0815 17:05:30.949237  385407 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0815 17:05:30.949365  385407 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0815 17:05:30.992011  385407 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-703024 --name addons-703024 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703024 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-703024 --network addons-703024 --ip 192.168.49.2 --volume addons-703024:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0815 17:05:31.278014  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Running}}
	I0815 17:05:31.295586  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:31.313998  385407 cli_runner.go:164] Run: docker exec addons-703024 stat /var/lib/dpkg/alternatives/iptables
	I0815 17:05:31.354108  385407 oci.go:144] the created container "addons-703024" has a running status.
	I0815 17:05:31.354144  385407 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa...
	I0815 17:05:31.438637  385407 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0815 17:05:31.459103  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:31.475486  385407 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0815 17:05:31.475513  385407 kic_runner.go:114] Args: [docker exec --privileged addons-703024 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0815 17:05:31.523622  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:31.539534  385407 machine.go:93] provisionDockerMachine start ...
	I0815 17:05:31.539628  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:31.557781  385407 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:31.558067  385407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:31.558092  385407 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:05:31.558766  385407 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60584->127.0.0.1:33138: read: connection reset by peer
	I0815 17:05:34.688086  385407 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-703024
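	[editor's note: the handshake error at 17:05:31 is expected; sshd inside the freshly started container is not up yet, and the provisioner simply retries the dial until it succeeds (three seconds later here). A minimal sketch of that retry using golang.org/x/crypto/ssh; the key path, port, and retry policy are illustrative, not minikube's own code:
	
	package main
	
	import (
		"fmt"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	// dialWithRetry keeps attempting an SSH handshake until it succeeds or the
	// attempts are exhausted; early failures are normal while sshd boots.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		var err error
		for i := 0; i < attempts; i++ {
			var c *ssh.Client
			if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
				return c, nil
			}
			time.Sleep(time.Second)
		}
		return nil, fmt.Errorf("ssh never came up at %s: %w", addr, err)
	}
	
	func main() {
		key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/addons-703024/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
			Timeout:         5 * time.Second,
		}
		client, err := dialWithRetry("127.0.0.1:33138", cfg, 30)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected")
	}
	]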
	
	I0815 17:05:34.688119  385407 ubuntu.go:169] provisioning hostname "addons-703024"
	I0815 17:05:34.688176  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:34.704503  385407 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:34.704732  385407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:34.704753  385407 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-703024 && echo "addons-703024" | sudo tee /etc/hostname
	I0815 17:05:34.847139  385407 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-703024
	
	I0815 17:05:34.847216  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:34.863824  385407 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:34.864014  385407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:34.864032  385407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-703024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-703024/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-703024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:05:34.992496  385407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
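The script above follows the stock Debian/Ubuntu convention of pointing 127.0.1.1 at the machine's own hostname so self-resolution works without DNS. A hypothetical spot check from the host (not part of the provisioning flow itself):

	docker exec addons-703024 grep '^127.0.1.1' /etc/hosts   # expected: 127.0.1.1 addons-703024
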
	I0815 17:05:34.992528  385407 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19450-377193/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-377193/.minikube}
	I0815 17:05:34.992597  385407 ubuntu.go:177] setting up certificates
	I0815 17:05:34.992613  385407 provision.go:84] configureAuth start
	I0815 17:05:34.992679  385407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703024
	I0815 17:05:35.008878  385407 provision.go:143] copyHostCerts
	I0815 17:05:35.008962  385407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem (1078 bytes)
	I0815 17:05:35.009145  385407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem (1123 bytes)
	I0815 17:05:35.009246  385407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem (1675 bytes)
	I0815 17:05:35.009330  385407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem org=jenkins.addons-703024 san=[127.0.0.1 192.168.49.2 addons-703024 localhost minikube]
	I0815 17:05:35.080384  385407 provision.go:177] copyRemoteCerts
	I0815 17:05:35.080450  385407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:05:35.080498  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.097041  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:35.192838  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:05:35.214352  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 17:05:35.235713  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:05:35.256396  385407 provision.go:87] duration metric: took 263.758764ms to configureAuth
	I0815 17:05:35.256434  385407 ubuntu.go:193] setting minikube options for container-runtime
	I0815 17:05:35.256648  385407 config.go:182] Loaded profile config "addons-703024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:05:35.256785  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.273282  385407 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:35.273466  385407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:35.273488  385407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:05:35.489141  385407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:05:35.489170  385407 machine.go:96] duration metric: took 3.949611229s to provisionDockerMachine
	I0815 17:05:35.489185  385407 client.go:171] duration metric: took 13.958847531s to LocalClient.Create
	I0815 17:05:35.489207  385407 start.go:167] duration metric: took 13.958924192s to libmachine.API.Create "addons-703024"
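The drop-in written a few lines above is presumably sourced by the CRI-O systemd unit on the kicbase image (EnvironmentFile-style), which is why the write is followed by a crio restart. Its content, exactly as echoed back:

	# /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

The insecure-registry range is the cluster service CIDR (10.96.0.0/12), so a registry exposed as a ClusterIP service can be pulled from over plain HTTP.
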
	I0815 17:05:35.489223  385407 start.go:293] postStartSetup for "addons-703024" (driver="docker")
	I0815 17:05:35.489239  385407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:05:35.489312  385407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:05:35.489364  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.505632  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:35.600949  385407 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:05:35.603743  385407 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 17:05:35.603771  385407 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 17:05:35.603779  385407 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 17:05:35.603787  385407 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 17:05:35.603798  385407 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/addons for local assets ...
	I0815 17:05:35.603852  385407 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/files for local assets ...
	I0815 17:05:35.603879  385407 start.go:296] duration metric: took 114.648796ms for postStartSetup
	I0815 17:05:35.604138  385407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703024
	I0815 17:05:35.620301  385407 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/config.json ...
	I0815 17:05:35.620569  385407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:05:35.620631  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.637047  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:35.725167  385407 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 17:05:35.729116  385407 start.go:128] duration metric: took 14.293097266s to createHost
	I0815 17:05:35.729138  385407 start.go:83] releasing machines lock for "addons-703024", held for 14.293248247s
	I0815 17:05:35.729201  385407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703024
	I0815 17:05:35.745135  385407 ssh_runner.go:195] Run: cat /version.json
	I0815 17:05:35.745178  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.745217  385407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:05:35.745290  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:35.762779  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:35.762953  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:35.924964  385407 ssh_runner.go:195] Run: systemctl --version
	I0815 17:05:35.928969  385407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:05:36.064297  385407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 17:05:36.068431  385407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:05:36.085549  385407 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 17:05:36.085624  385407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:05:36.110561  385407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0815 17:05:36.110595  385407 start.go:495] detecting cgroup driver to use...
	I0815 17:05:36.110632  385407 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 17:05:36.110703  385407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:05:36.124282  385407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:05:36.133697  385407 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:05:36.133756  385407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:05:36.145434  385407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:05:36.157661  385407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:05:36.232863  385407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:05:36.312986  385407 docker.go:233] disabling docker service ...
	I0815 17:05:36.313042  385407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:05:36.329581  385407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:05:36.339542  385407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:05:36.412453  385407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:05:36.489001  385407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:05:36.499108  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:05:36.513099  385407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:05:36.513154  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.521711  385407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:05:36.521776  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.530130  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.538275  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.546441  385407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:05:36.554023  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.562209  385407 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.575622  385407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:05:36.583882  385407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:05:36.591010  385407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:05:36.598095  385407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:05:36.671655  385407 ssh_runner.go:195] Run: sudo systemctl restart crio
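Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a reconstruction from the commands, not a capture from the node, and the [crio.*] section headers are assumed from the stock kicbase layout:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
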
	I0815 17:05:36.776793  385407 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:05:36.776873  385407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:05:36.780024  385407 start.go:563] Will wait 60s for crictl version
	I0815 17:05:36.780069  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:05:36.782824  385407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:05:36.815202  385407 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 17:05:36.815285  385407 ssh_runner.go:195] Run: crio --version
	I0815 17:05:36.851080  385407 ssh_runner.go:195] Run: crio --version
	I0815 17:05:36.887218  385407 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 17:05:36.888375  385407 cli_runner.go:164] Run: docker network inspect addons-703024 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:05:36.904036  385407 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 17:05:36.907383  385407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:05:36.917066  385407 kubeadm.go:883] updating cluster {Name:addons-703024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-703024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:05:36.917205  385407 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:05:36.917250  385407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:05:36.977292  385407 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:05:36.977315  385407 crio.go:433] Images already preloaded, skipping extraction
	I0815 17:05:36.977358  385407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:05:37.008155  385407 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:05:37.008178  385407 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:05:37.008186  385407 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0815 17:05:37.008296  385407 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-703024 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-703024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
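The [Unit]/[Service]/[Install] fragment above is the 10-kubeadm.conf drop-in that gets scp'd to /etc/systemd/system/kubelet.service.d/ a few lines below (363 bytes). Once in place, the merged unit can be inspected on the node with:

	systemctl cat kubelet   # shows kubelet.service plus the drop-in above
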
	I0815 17:05:37.008363  385407 ssh_runner.go:195] Run: crio config
	I0815 17:05:37.047478  385407 cni.go:84] Creating CNI manager for ""
	I0815 17:05:37.047496  385407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 17:05:37.047506  385407 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:05:37.047528  385407 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-703024 NodeName:addons-703024 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:05:37.047666  385407 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-703024"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
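The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2151 bytes). Before handing such a file to kubeadm it can be validated offline; a hedged example using kubeadm's dry-run mode, which would also surface the v1beta3 deprecation warnings seen later in this log:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
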
	
	I0815 17:05:37.047725  385407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:05:37.055886  385407 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:05:37.055942  385407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 17:05:37.063534  385407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 17:05:37.078589  385407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:05:37.093782  385407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0815 17:05:37.109397  385407 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0815 17:05:37.112248  385407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:05:37.121333  385407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:05:37.196832  385407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:05:37.208591  385407 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024 for IP: 192.168.49.2
	I0815 17:05:37.208627  385407 certs.go:194] generating shared ca certs ...
	I0815 17:05:37.208649  385407 certs.go:226] acquiring lock for ca certs: {Name:mkf196aaefcb61003123eeb327e0f1a70bf4bfe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.208783  385407 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key
	I0815 17:05:37.263047  385407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt ...
	I0815 17:05:37.263078  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt: {Name:mk399af234c069e3ed75cc5132478ed5f424a637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.263232  385407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key ...
	I0815 17:05:37.263242  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key: {Name:mk7670345ad8e9e93de5e51cbe26f447c50a667f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.263312  385407 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key
	I0815 17:05:37.349644  385407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt ...
	I0815 17:05:37.349675  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt: {Name:mkb84e4ed90993f652fd97864a136f02e4db5580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.349849  385407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key ...
	I0815 17:05:37.349861  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key: {Name:mkd3a1fc36993b42851f4c114648a631c92b494d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.349932  385407 certs.go:256] generating profile certs ...
	I0815 17:05:37.349991  385407 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.key
	I0815 17:05:37.350006  385407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt with IP's: []
	I0815 17:05:37.836177  385407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt ...
	I0815 17:05:37.836212  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: {Name:mkd168136aba0e51c304406ace01a3841be06252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.836376  385407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.key ...
	I0815 17:05:37.836387  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.key: {Name:mk8263c3e99d11398fd40554bb2162bc05a08af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.836456  385407 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key.49a1a781
	I0815 17:05:37.836474  385407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt.49a1a781 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0815 17:05:37.969541  385407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt.49a1a781 ...
	I0815 17:05:37.969571  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt.49a1a781: {Name:mk86cb345bf5335803b3d8217df84c7d593c372a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.969734  385407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key.49a1a781 ...
	I0815 17:05:37.969748  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key.49a1a781: {Name:mk278b909fa90b694010d5b20a202adb7f1f7246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:37.969824  385407 certs.go:381] copying /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt.49a1a781 -> /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt
	I0815 17:05:37.969894  385407 certs.go:385] copying /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key.49a1a781 -> /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key
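The SAN set requested above (the service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.49.2) can be confirmed on the finished certificate; a verification sketch against the profile path used in this run:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
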
	I0815 17:05:37.969940  385407 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.key
	I0815 17:05:37.969957  385407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.crt with IP's: []
	I0815 17:05:38.137436  385407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.crt ...
	I0815 17:05:38.137468  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.crt: {Name:mk86d754f7b46fdf2d05689b8fe52bba57601036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:38.137626  385407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.key ...
	I0815 17:05:38.137639  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.key: {Name:mkda8d1a5d469f7adedc152e763b78617c8ff925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:38.137806  385407 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 17:05:38.137844  385407 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem (1078 bytes)
	I0815 17:05:38.137869  385407 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:05:38.137893  385407 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem (1675 bytes)
	I0815 17:05:38.138562  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:05:38.160252  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:05:38.180486  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:05:38.200523  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 17:05:38.220382  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 17:05:38.240194  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 17:05:38.260446  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:05:38.280684  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 17:05:38.300643  385407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:05:38.320898  385407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:05:38.335760  385407 ssh_runner.go:195] Run: openssl version
	I0815 17:05:38.340511  385407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:05:38.348445  385407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:05:38.351373  385407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:05:38.351425  385407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:05:38.357400  385407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
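The b5213941.0 link name is not arbitrary: it is the certificate's OpenSSL subject hash plus a .0 collision-counter suffix, the naming scheme c_rehash uses so OpenSSL can look up CAs in a hashed directory. The hash itself comes from the x509 -hash invocation just above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
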
	I0815 17:05:38.365013  385407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:05:38.367679  385407 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:05:38.367756  385407 kubeadm.go:392] StartCluster: {Name:addons-703024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-703024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:38.367840  385407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 17:05:38.367904  385407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:05:38.399527  385407 cri.go:89] found id: ""
	I0815 17:05:38.399605  385407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 17:05:38.407319  385407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 17:05:38.414900  385407 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0815 17:05:38.414962  385407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 17:05:38.422359  385407 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 17:05:38.422381  385407 kubeadm.go:157] found existing configuration files:
	
	I0815 17:05:38.422416  385407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 17:05:38.429626  385407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 17:05:38.429685  385407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 17:05:38.436592  385407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 17:05:38.443481  385407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 17:05:38.443535  385407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 17:05:38.450422  385407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 17:05:38.457607  385407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 17:05:38.457653  385407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 17:05:38.464584  385407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 17:05:38.471713  385407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 17:05:38.471754  385407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 17:05:38.478594  385407 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0815 17:05:38.512244  385407 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 17:05:38.512315  385407 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 17:05:38.530381  385407 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0815 17:05:38.530455  385407 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0815 17:05:38.530540  385407 kubeadm.go:310] OS: Linux
	I0815 17:05:38.530629  385407 kubeadm.go:310] CGROUPS_CPU: enabled
	I0815 17:05:38.530704  385407 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0815 17:05:38.530771  385407 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0815 17:05:38.530823  385407 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0815 17:05:38.530900  385407 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0815 17:05:38.530982  385407 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0815 17:05:38.531058  385407 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0815 17:05:38.531127  385407 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0815 17:05:38.531195  385407 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0815 17:05:38.580174  385407 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 17:05:38.580346  385407 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 17:05:38.580495  385407 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 17:05:38.586419  385407 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 17:05:38.589924  385407 out.go:235]   - Generating certificates and keys ...
	I0815 17:05:38.590015  385407 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 17:05:38.590071  385407 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 17:05:38.823342  385407 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 17:05:39.014648  385407 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 17:05:39.129731  385407 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 17:05:39.446496  385407 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 17:05:39.755320  385407 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 17:05:39.755471  385407 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-703024 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 17:05:39.966187  385407 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 17:05:39.966343  385407 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-703024 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 17:05:40.040875  385407 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 17:05:40.160458  385407 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 17:05:40.250890  385407 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 17:05:40.250966  385407 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 17:05:40.434931  385407 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 17:05:40.588956  385407 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 17:05:40.650170  385407 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 17:05:40.807576  385407 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 17:05:41.057971  385407 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 17:05:41.058417  385407 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 17:05:41.060795  385407 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 17:05:41.062961  385407 out.go:235]   - Booting up control plane ...
	I0815 17:05:41.063077  385407 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 17:05:41.063181  385407 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 17:05:41.063261  385407 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 17:05:41.072088  385407 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 17:05:41.077177  385407 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 17:05:41.077234  385407 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 17:05:41.149166  385407 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 17:05:41.149314  385407 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 17:05:42.150756  385407 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001583708s
	I0815 17:05:42.150857  385407 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 17:05:46.152997  385407 kubeadm.go:310] [api-check] The API server is healthy after 4.002260813s
	I0815 17:05:46.163644  385407 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 17:05:46.173987  385407 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 17:05:46.190920  385407 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 17:05:46.191162  385407 kubeadm.go:310] [mark-control-plane] Marking the node addons-703024 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 17:05:46.197933  385407 kubeadm.go:310] [bootstrap-token] Using token: krclci.kozi6o9ch4qso3c4
	I0815 17:05:46.199520  385407 out.go:235]   - Configuring RBAC rules ...
	I0815 17:05:46.199678  385407 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 17:05:46.202196  385407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 17:05:46.208004  385407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 17:05:46.210336  385407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 17:05:46.212437  385407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 17:05:46.216225  385407 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 17:05:46.559261  385407 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 17:05:46.980383  385407 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 17:05:47.558236  385407 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 17:05:47.559403  385407 kubeadm.go:310] 
	I0815 17:05:47.559488  385407 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 17:05:47.559498  385407 kubeadm.go:310] 
	I0815 17:05:47.559611  385407 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 17:05:47.559621  385407 kubeadm.go:310] 
	I0815 17:05:47.559668  385407 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 17:05:47.559753  385407 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 17:05:47.559820  385407 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 17:05:47.559829  385407 kubeadm.go:310] 
	I0815 17:05:47.559903  385407 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 17:05:47.559913  385407 kubeadm.go:310] 
	I0815 17:05:47.559971  385407 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 17:05:47.559981  385407 kubeadm.go:310] 
	I0815 17:05:47.560054  385407 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 17:05:47.560154  385407 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 17:05:47.560252  385407 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 17:05:47.560277  385407 kubeadm.go:310] 
	I0815 17:05:47.560398  385407 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 17:05:47.560522  385407 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 17:05:47.560532  385407 kubeadm.go:310] 
	I0815 17:05:47.560658  385407 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token krclci.kozi6o9ch4qso3c4 \
	I0815 17:05:47.560800  385407 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a342846b00061d7c3551c06e4f758c5edc3939c9da852e4d92590498b260c16a \
	I0815 17:05:47.560828  385407 kubeadm.go:310] 	--control-plane 
	I0815 17:05:47.560841  385407 kubeadm.go:310] 
	I0815 17:05:47.560962  385407 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 17:05:47.560974  385407 kubeadm.go:310] 
	I0815 17:05:47.561085  385407 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token krclci.kozi6o9ch4qso3c4 \
	I0815 17:05:47.561230  385407 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a342846b00061d7c3551c06e4f758c5edc3939c9da852e4d92590498b260c16a 
	I0815 17:05:47.563083  385407 kubeadm.go:310] W0815 17:05:38.509946    1298 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:05:47.563342  385407 kubeadm.go:310] W0815 17:05:38.510502    1298 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:05:47.563559  385407 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0815 17:05:47.563690  385407 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
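The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. Per the kubeadm documentation it can be recomputed on the control plane, with the path adjusted for the certificateDir in the config above and assuming the default RSA CA key:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
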
	I0815 17:05:47.563724  385407 cni.go:84] Creating CNI manager for ""
	I0815 17:05:47.563737  385407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 17:05:47.566230  385407 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 17:05:47.567449  385407 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 17:05:47.570965  385407 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 17:05:47.570980  385407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 17:05:47.586801  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 17:05:47.770652  385407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 17:05:47.770726  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:47.770766  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-703024 minikube.k8s.io/updated_at=2024_08_15T17_05_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=addons-703024 minikube.k8s.io/primary=true
	I0815 17:05:47.778317  385407 ops.go:34] apiserver oom_adj: -16
	I0815 17:05:47.862867  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:48.362934  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:48.863465  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:49.363857  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:49.863590  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:50.362898  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:50.863169  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:51.363185  385407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:05:51.424435  385407 kubeadm.go:1113] duration metric: took 3.653768566s to wait for elevateKubeSystemPrivileges
	I0815 17:05:51.424468  385407 kubeadm.go:394] duration metric: took 13.05672834s to StartCluster
	I0815 17:05:51.424485  385407 settings.go:142] acquiring lock: {Name:mke1aec41bab7354aae03597d79755a9c481f6a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:51.424619  385407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:05:51.424973  385407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/kubeconfig: {Name:mk661ec10a39902a1883ea9ee46c4be1d73fd858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:51.425140  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 17:05:51.425235  385407 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:05:51.425319  385407 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
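The toEnable map above is the full addon matrix for this profile; every entry set to true gets its manifests copied to /etc/kubernetes/addons/ and applied in the lines that follow. Individual addons can be toggled the same way from the CLI, for example:

	out/minikube-linux-amd64 -p addons-703024 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-703024 addons disable volcano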
	I0815 17:05:51.425424  385407 addons.go:69] Setting yakd=true in profile "addons-703024"
	I0815 17:05:51.425441  385407 addons.go:69] Setting inspektor-gadget=true in profile "addons-703024"
	I0815 17:05:51.425465  385407 addons.go:234] Setting addon yakd=true in "addons-703024"
	I0815 17:05:51.425466  385407 addons.go:69] Setting ingress=true in profile "addons-703024"
	I0815 17:05:51.425483  385407 config.go:182] Loaded profile config "addons-703024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:05:51.425488  385407 addons.go:69] Setting ingress-dns=true in profile "addons-703024"
	I0815 17:05:51.425495  385407 addons.go:69] Setting helm-tiller=true in profile "addons-703024"
	I0815 17:05:51.425507  385407 addons.go:234] Setting addon ingress-dns=true in "addons-703024"
	I0815 17:05:51.425478  385407 addons.go:234] Setting addon inspektor-gadget=true in "addons-703024"
	I0815 17:05:51.425519  385407 addons.go:69] Setting metrics-server=true in profile "addons-703024"
	I0815 17:05:51.425529  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425533  385407 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-703024"
	I0815 17:05:51.425545  385407 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-703024"
	I0815 17:05:51.425555  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425556  385407 addons.go:69] Setting registry=true in profile "addons-703024"
	I0815 17:05:51.425566  385407 addons.go:69] Setting volumesnapshots=true in profile "addons-703024"
	I0815 17:05:51.425575  385407 addons.go:234] Setting addon registry=true in "addons-703024"
	I0815 17:05:51.425581  385407 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-703024"
	I0815 17:05:51.425588  385407 addons.go:234] Setting addon volumesnapshots=true in "addons-703024"
	I0815 17:05:51.425601  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425616  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425548  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425508  385407 addons.go:234] Setting addon ingress=true in "addons-703024"
	I0815 17:05:51.425738  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.425898  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.425556  385407 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-703024"
	I0815 17:05:51.426004  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.426063  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.426071  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.426079  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.426121  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.426150  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.425521  385407 addons.go:234] Setting addon helm-tiller=true in "addons-703024"
	I0815 17:05:51.426321  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.426776  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.425567  385407 addons.go:234] Setting addon metrics-server=true in "addons-703024"
	I0815 17:05:51.425485  385407 addons.go:69] Setting default-storageclass=true in profile "addons-703024"
	I0815 17:05:51.425535  385407 addons.go:69] Setting volcano=true in profile "addons-703024"
	I0815 17:05:51.425475  385407 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-703024"
	I0815 17:05:51.425487  385407 addons.go:69] Setting cloud-spanner=true in profile "addons-703024"
	I0815 17:05:51.425507  385407 addons.go:69] Setting storage-provisioner=true in profile "addons-703024"
	I0815 17:05:51.426066  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.427159  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.427241  385407 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-703024"
	I0815 17:05:51.427302  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.427372  385407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-703024"
	I0815 17:05:51.427421  385407 addons.go:234] Setting addon cloud-spanner=true in "addons-703024"
	I0815 17:05:51.428204  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.426894  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.428430  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.428744  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.428748  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.427511  385407 addons.go:234] Setting addon volcano=true in "addons-703024"
	I0815 17:05:51.429128  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.427562  385407 addons.go:234] Setting addon storage-provisioner=true in "addons-703024"
	I0815 17:05:51.429178  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.429560  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.429592  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.425454  385407 addons.go:69] Setting gcp-auth=true in profile "addons-703024"
	I0815 17:05:51.431391  385407 mustload.go:65] Loading cluster: addons-703024
	I0815 17:05:51.431596  385407 config.go:182] Loaded profile config "addons-703024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:05:51.431876  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.427912  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.427542  385407 out.go:177] * Verifying Kubernetes components...
	I0815 17:05:51.443755  385407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:05:51.467558  385407 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-703024"
	I0815 17:05:51.467612  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.468210  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.468414  385407 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 17:05:51.468531  385407 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 17:05:51.469979  385407 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 17:05:51.470043  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 17:05:51.471169  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.470081  385407 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 17:05:51.471350  385407 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 17:05:51.471392  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.471091  385407 addons.go:234] Setting addon default-storageclass=true in "addons-703024"
	I0815 17:05:51.472464  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.473084  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:51.474111  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 17:05:51.474148  385407 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 17:05:51.474179  385407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:05:51.476230  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 17:05:51.476253  385407 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 17:05:51.476314  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.476636  385407 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 17:05:51.476654  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 17:05:51.476698  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.480182  385407 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 17:05:51.481587  385407 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 17:05:51.481662  385407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 17:05:51.484255  385407 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 17:05:51.484276  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 17:05:51.484349  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.484871  385407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:05:51.486160  385407 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 17:05:51.486510  385407 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 17:05:51.486528  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 17:05:51.486592  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.494703  385407 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 17:05:51.494726  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 17:05:51.494786  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.497031  385407 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 17:05:51.501549  385407 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 17:05:51.501579  385407 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 17:05:51.501650  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.503283  385407 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0815 17:05:51.506367  385407 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0815 17:05:51.506394  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0815 17:05:51.506458  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.514649  385407 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 17:05:51.515872  385407 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 17:05:51.515893  385407 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 17:05:51.515972  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	W0815 17:05:51.528185  385407 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
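The warning above is expected on this runner: enabling volcano returns an error on the cri-o runtime instead of installing anything, so the remaining addons proceed without it. If volcano were actually needed, the profile would have to be created on a runtime other than cri-o, e.g. (untested here):

	minikube start -p volcano-test --container-runtime=containerd --addons=volcano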
	I0815 17:05:51.529295  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
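Each of the `new ssh client` lines below reuses the same endpoint: the docker driver publishes the container's port 22 on a random host port (33138 here), which minikube resolves with the container-inspect template shown in the Run lines above. The same lookup can be done by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-703024
	# -> 33138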
	I0815 17:05:51.531106  385407 out.go:177]   - Using image docker.io/busybox:stable
	I0815 17:05:51.532383  385407 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 17:05:51.533619  385407 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 17:05:51.533639  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 17:05:51.533696  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.535188  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 17:05:51.536682  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 17:05:51.537964  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 17:05:51.539717  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:51.539817  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.541226  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 17:05:51.542562  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 17:05:51.543845  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 17:05:51.545121  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 17:05:51.545743  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.546843  385407 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 17:05:51.547168  385407 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 17:05:51.547225  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.548319  385407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 17:05:51.549825  385407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 17:05:51.549938  385407 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:05:51.549962  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 17:05:51.550019  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.551028  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.551329  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 17:05:51.551348  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 17:05:51.551398  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:51.551403  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.569267  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.570543  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.582473  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.584797  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.586357  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.597566  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.601773  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.603225  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:51.607246  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	W0815 17:05:51.653436  385407 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 17:05:51.653475  385407 retry.go:31] will retry after 305.49033ms: ssh: handshake failed: EOF
	I0815 17:05:51.658732  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
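The long bash pipeline above patches CoreDNS in place: it reads the coredns ConfigMap, uses sed to splice a hosts block in front of the `forward . /etc/resolv.conf` line (and a `log` directive before `errors`), then feeds the result back through `kubectl replace`. Once it completes (the "host record injected" line further down), the Corefile contains, in effect:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }

so pods can resolve host.minikube.internal to the host's address on the cluster network (192.168.49.1).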
	I0815 17:05:51.754633  385407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:05:51.856692  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 17:05:51.958573  385407 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 17:05:51.958598  385407 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 17:05:51.958604  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 17:05:52.061057  385407 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 17:05:52.061092  385407 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 17:05:52.065872  385407 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 17:05:52.065901  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 17:05:52.153051  385407 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0815 17:05:52.153148  385407 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0815 17:05:52.155099  385407 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 17:05:52.155181  385407 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 17:05:52.155130  385407 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 17:05:52.155273  385407 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 17:05:52.155961  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:05:52.161212  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 17:05:52.166461  385407 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 17:05:52.166496  385407 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 17:05:52.257320  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 17:05:52.259558  385407 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 17:05:52.259638  385407 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 17:05:52.265416  385407 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 17:05:52.265508  385407 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0815 17:05:52.273344  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 17:05:52.353531  385407 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 17:05:52.353635  385407 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 17:05:52.360446  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:05:52.367522  385407 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 17:05:52.367608  385407 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 17:05:52.453336  385407 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 17:05:52.453435  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 17:05:52.472914  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 17:05:52.557086  385407 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:05:52.557182  385407 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 17:05:52.565246  385407 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 17:05:52.565276  385407 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 17:05:52.572975  385407 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 17:05:52.573054  385407 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 17:05:52.654248  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 17:05:52.654331  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 17:05:52.854050  385407 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 17:05:52.854079  385407 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 17:05:52.954195  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:05:52.955706  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 17:05:52.955777  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 17:05:52.957817  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 17:05:52.963997  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 17:05:52.964025  385407 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 17:05:52.967980  385407 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 17:05:52.968002  385407 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 17:05:53.156760  385407 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:05:53.156787  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 17:05:53.255040  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 17:05:53.255072  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 17:05:53.263861  385407 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 17:05:53.263892  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 17:05:53.359760  385407 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.700985691s)
	I0815 17:05:53.359800  385407 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0815 17:05:53.361060  385407 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.606393204s)
	I0815 17:05:53.361896  385407 node_ready.go:35] waiting up to 6m0s for node "addons-703024" to be "Ready" ...
	I0815 17:05:53.362108  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.505384387s)
	I0815 17:05:53.362350  385407 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 17:05:53.362366  385407 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 17:05:53.458270  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:05:53.660175  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 17:05:53.756392  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 17:05:53.756481  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 17:05:53.759818  385407 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 17:05:53.759899  385407 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 17:05:53.964938  385407 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-703024" context rescaled to 1 replicas
	I0815 17:05:54.176828  385407 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 17:05:54.176911  385407 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 17:05:54.258921  385407 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 17:05:54.258997  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 17:05:54.370736  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 17:05:54.370821  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 17:05:54.566434  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 17:05:54.866051  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 17:05:54.866147  385407 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 17:05:55.256980  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 17:05:55.257074  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 17:05:55.272458  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.313811215s)
	I0815 17:05:55.272594  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.116604221s)
	I0815 17:05:55.373596  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:05:55.474717  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 17:05:55.474748  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 17:05:55.675263  385407 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 17:05:55.675293  385407 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 17:05:55.873052  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 17:05:57.464516  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.303212304s)
	I0815 17:05:57.464576  385407 addons.go:475] Verifying addon ingress=true in "addons-703024"
	I0815 17:05:57.464618  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.207196625s)
	I0815 17:05:57.464726  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.191301732s)
	I0815 17:05:57.464803  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.104274658s)
	I0815 17:05:57.464889  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.991890496s)
	I0815 17:05:57.464970  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.510743669s)
	I0815 17:05:57.464993  385407 addons.go:475] Verifying addon metrics-server=true in "addons-703024"
	I0815 17:05:57.465038  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.507194695s)
	I0815 17:05:57.465055  385407 addons.go:475] Verifying addon registry=true in "addons-703024"
	I0815 17:05:57.466452  385407 out.go:177] * Verifying ingress addon...
	I0815 17:05:57.467328  385407 out.go:177] * Verifying registry addon...
	I0815 17:05:57.468912  385407 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 17:05:57.469839  385407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 17:05:57.475723  385407 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 17:05:57.475782  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:57.475979  385407 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 17:05:57.476000  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:05:57.866039  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:05:57.973168  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:57.973804  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:05:58.474536  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:58.475219  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:05:58.488985  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.030604422s)
	W0815 17:05:58.489080  385407 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 17:05:58.489114  385407 retry.go:31] will retry after 259.352422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 17:05:58.489117  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.828858761s)
	I0815 17:05:58.489209  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.922670339s)
	I0815 17:05:58.490791  385407 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-703024 service yakd-dashboard -n yakd-dashboard
	
	I0815 17:05:58.748823  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
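The failure at 17:05:58.489 is a CRD race: all six volumesnapshot manifests were applied in one batch, and the VolumeSnapshotClass object was submitted before the API server had finished establishing the CRD that defines it, hence "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first". minikube's retry.go backs off ~260ms and re-applies, this time with --force (the Run line just above). A hypothetical standalone sketch of the same recover-and-retry pattern (minikube does this internally):

	for delay in 1 2 4; do
	  kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml && break
	  sleep "$delay"
	done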
	I0815 17:05:58.758298  385407 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 17:05:58.758369  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:58.780391  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:58.974027  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:58.975213  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:05:59.074453  385407 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 17:05:59.153987  385407 addons.go:234] Setting addon gcp-auth=true in "addons-703024"
	I0815 17:05:59.154088  385407 host.go:66] Checking if "addons-703024" exists ...
	I0815 17:05:59.154659  385407 cli_runner.go:164] Run: docker container inspect addons-703024 --format={{.State.Status}}
	I0815 17:05:59.180754  385407 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 17:05:59.180811  385407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703024
	I0815 17:05:59.200076  385407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/addons-703024/id_rsa Username:docker}
	I0815 17:05:59.372825  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.499647938s)
	I0815 17:05:59.372868  385407 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-703024"
	I0815 17:05:59.374358  385407 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 17:05:59.376913  385407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 17:05:59.379446  385407 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 17:05:59.379468  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
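These kapi.go lines implement the addon verification loop: every ~500ms the pod list for the label selector is re-checked until the pods leave Pending, at which point the corresponding "Verifying ... addon" step completes. Outside of minikube, roughly the same wait can be expressed with kubectl directly (a sketch, not what the test harness runs):

	kubectl --context addons-703024 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --for=condition=Ready --timeout=6m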
	I0815 17:05:59.473084  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:05:59.473558  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:59.880914  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:05:59.972352  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:05:59.973090  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:00.364910  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:06:00.380684  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:00.472629  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:00.472830  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:00.880230  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:00.972312  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:00.972377  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:01.380819  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:01.474597  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:01.474963  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:01.881716  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:01.972692  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:01.972900  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:02.203932  385407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.45504696s)
	I0815 17:06:02.203975  385407 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.023189594s)
	I0815 17:06:02.206164  385407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:06:02.207669  385407 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 17:06:02.208956  385407 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 17:06:02.208975  385407 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 17:06:02.255150  385407 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 17:06:02.255182  385407 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 17:06:02.273229  385407 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 17:06:02.273258  385407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 17:06:02.290404  385407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 17:06:02.365310  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:06:02.380048  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:02.472586  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:02.473137  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:02.868616  385407 addons.go:475] Verifying addon gcp-auth=true in "addons-703024"
	I0815 17:06:02.870658  385407 out.go:177] * Verifying gcp-auth addon...
	I0815 17:06:02.873177  385407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 17:06:02.877483  385407 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 17:06:02.877504  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:02.879343  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:02.972755  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:02.973176  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:03.376386  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:03.379610  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:03.472375  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:03.472838  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:03.876697  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:03.879839  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:03.972881  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:03.973038  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:04.365376  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:06:04.376741  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:04.379909  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:04.472968  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:04.472980  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:04.876537  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:04.879687  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:04.972794  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:04.972964  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:05.376933  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:05.379570  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:05.472710  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:05.472782  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:05.876423  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:05.879388  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:05.972455  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:05.972483  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:06.376271  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:06.379378  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:06.472409  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:06.472515  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:06.865250  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:06:06.877094  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:06.879207  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:06.972508  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:06.972901  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:07.375801  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:07.379993  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:07.472732  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:07.473326  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:07.877194  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:07.879399  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:07.972523  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:07.972533  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:08.376566  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:08.379395  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:08.472436  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:08.472498  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:08.865468  385407 node_ready.go:53] node "addons-703024" has status "Ready":"False"
	I0815 17:06:08.877937  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:08.879370  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:08.972172  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:08.972520  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:09.376491  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:09.379384  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:09.472348  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:09.472391  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:09.876354  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:09.879173  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:09.972670  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:09.973072  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:10.377768  385407 node_ready.go:49] node "addons-703024" has status "Ready":"True"
	I0815 17:06:10.377796  385407 node_ready.go:38] duration metric: took 17.015874306s for node "addons-703024" to be "Ready" ...
	I0815 17:06:10.377809  385407 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
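(Editor's note, for context on the repeated "waiting for pod ... current state: Pending" lines above and below: once the node reports Ready, the harness polls the API server for pods matching each label selector until their Ready condition is True, logging one line per poll. Below is a minimal, illustrative sketch of such a readiness poll in Go with client-go; the function names and the 500ms cadence are assumptions chosen to match the timestamp spacing visible in the log, not minikube's actual kapi.go/pod_ready.go implementation.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForLabel polls pods matching selector in ns until every match is
	// Ready or the timeout expires. One iteration corresponds to one
	// "waiting for pod ..." log line above.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allReady := true
				for i := range pods.Items {
					if !isPodReady(&pods.Items[i]) {
						allReady = false
						break
					}
				}
				if allReady {
					return nil
				}
			}
			// Assumed cadence; the log above shows polls roughly every 500ms.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for pods with selector %q", selector)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(cs, "kube-system", "k8s-app=kube-dns", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("all pods ready")
	}
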
	I0815 17:06:10.378313  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:10.379194  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:10.461861  385407 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qkxj6" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:10.476140  385407 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 17:06:10.476167  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:10.476299  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:10.876640  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:10.880522  385407 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 17:06:10.880542  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:10.977230  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:10.977531  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:11.381698  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:11.381991  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:11.483169  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:11.483232  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:11.876995  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:11.880418  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:11.972806  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:11.973624  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:12.377303  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:12.380816  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:12.468162  385407 pod_ready.go:93] pod "coredns-6f6b679f8f-qkxj6" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.468190  385407 pod_ready.go:82] duration metric: took 2.006298074s for pod "coredns-6f6b679f8f-qkxj6" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.468214  385407 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.472443  385407 pod_ready.go:93] pod "etcd-addons-703024" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.472465  385407 pod_ready.go:82] duration metric: took 4.243953ms for pod "etcd-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.472476  385407 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.472999  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:12.473316  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:12.476235  385407 pod_ready.go:93] pod "kube-apiserver-addons-703024" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.476254  385407 pod_ready.go:82] duration metric: took 3.770872ms for pod "kube-apiserver-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.476265  385407 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.479893  385407 pod_ready.go:93] pod "kube-controller-manager-addons-703024" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.479909  385407 pod_ready.go:82] duration metric: took 3.637464ms for pod "kube-controller-manager-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.479919  385407 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nsvg6" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.483565  385407 pod_ready.go:93] pod "kube-proxy-nsvg6" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.483581  385407 pod_ready.go:82] duration metric: took 3.657002ms for pod "kube-proxy-nsvg6" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.483589  385407 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.866512  385407 pod_ready.go:93] pod "kube-scheduler-addons-703024" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:12.866533  385407 pod_ready.go:82] duration metric: took 382.938072ms for pod "kube-scheduler-addons-703024" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.866543  385407 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:12.876077  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:12.880688  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:12.973021  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:12.973195  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:13.377170  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:13.380489  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:13.472895  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:13.473315  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:13.876871  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:13.880455  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:13.972765  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:13.973309  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:14.376073  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:14.381145  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:14.472588  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:14.473017  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:14.873343  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:14.875842  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:14.880760  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:14.973106  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:14.973496  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:15.376211  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:15.381545  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:15.473921  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:15.474138  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:15.876045  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:15.881444  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:15.972824  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:15.972969  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:16.376000  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:16.380534  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:16.472830  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:16.473127  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:16.876764  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:16.881577  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:16.973181  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:16.973479  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:17.373018  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:17.376442  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:17.380904  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:17.473184  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:17.473413  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:17.875778  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:17.881218  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:17.973262  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:17.974224  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:18.376112  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:18.381056  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:18.477575  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:18.478004  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:18.876525  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:18.880573  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:18.972949  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:18.973222  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:19.376296  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:19.377598  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:19.381723  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:19.478360  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:19.478561  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:19.877172  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:19.880691  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:19.974224  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:19.974665  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:20.376815  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:20.455397  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:20.474960  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:20.475553  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:20.876368  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:20.882113  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:20.973570  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:20.974453  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:21.376121  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:21.381279  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:21.472914  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:21.472914  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:21.872111  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:21.877208  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:21.880448  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:21.973777  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:21.974499  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:22.375749  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:22.380427  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:22.472870  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:22.473029  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:22.876411  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:22.880361  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:22.973508  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:22.974991  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:23.375988  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:23.381204  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:23.473406  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:23.473545  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:23.877008  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:23.880043  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:23.977919  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:23.978279  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:24.371596  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:24.376096  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:24.380642  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:24.477603  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:24.477956  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:24.876504  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:24.881403  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:24.973373  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:24.973688  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:25.375859  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:25.381150  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:25.473703  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:25.474139  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:25.876519  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:25.880343  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:25.973092  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:25.973169  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:26.372183  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:26.376592  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:26.380393  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:26.473091  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:26.473471  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:26.875870  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:26.880899  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:26.973644  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:26.974702  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:27.376368  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:27.379825  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:27.477618  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:27.477889  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:27.875708  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:27.880876  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:27.976164  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:27.976637  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:28.372718  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:28.376619  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:28.380861  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:28.473718  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:28.474408  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:28.875703  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:28.880512  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:28.973104  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:28.973505  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:29.376268  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:29.380295  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:29.477602  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:29.478160  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:29.876408  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:29.880356  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:29.972697  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:29.973272  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:30.375755  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:30.382948  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:30.473095  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:30.473547  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:30.872754  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:30.877129  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:30.880010  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:30.977813  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:30.978122  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:31.375523  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:31.380086  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:31.473125  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:31.473131  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:31.876940  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:31.880836  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:31.973265  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:31.973483  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:32.376093  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:32.381571  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:32.473056  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:32.473087  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:32.876432  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:32.880824  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:32.973615  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:32.974365  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:33.373006  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:33.375721  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:33.380669  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:33.473066  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:33.473863  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:33.876419  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:33.880377  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:33.972771  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:33.973200  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:34.376359  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:34.380997  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:34.473295  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:34.473427  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:34.876918  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:34.881033  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:34.976601  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:34.977833  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:35.376200  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:35.381259  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:35.472993  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:35.473108  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:35.872611  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:35.875739  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:35.880831  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:35.972929  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:35.973399  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:36.376376  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:36.380369  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:36.473001  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:36.473368  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:36.875782  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:36.881903  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:36.972601  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:36.972856  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:37.376124  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:37.380737  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:37.473138  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:37.473411  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:37.872748  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:37.875948  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:37.881168  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:37.973343  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:37.973933  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:38.375829  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:38.380736  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:38.473039  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:38.473137  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:38.875442  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:38.880020  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:38.972763  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:38.972906  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:39.376281  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:39.379870  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:39.473059  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:39.473451  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:39.875820  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:39.880480  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:39.973121  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:39.973518  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:40.372534  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:40.376563  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:40.381685  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:40.473702  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:40.474161  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:40.962764  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:40.964332  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:40.978379  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:40.978719  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:41.377406  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:41.382217  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:41.473262  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:41.474226  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:41.876216  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:41.881830  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:41.973151  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:41.973567  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:42.372665  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:42.375914  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:42.381210  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:42.473121  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:42.473325  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:42.876328  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:42.881734  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:42.973397  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:42.974314  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:43.376321  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:43.381572  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:43.473106  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:43.473518  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:43.875601  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:43.880620  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:43.973590  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:43.973783  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:44.372874  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:44.375882  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:44.380802  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:44.473010  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:44.473094  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:44.876904  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:44.880920  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:44.972732  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:44.973066  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:45.376047  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:45.380937  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:45.473606  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:45.473818  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:45.876387  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:45.880172  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:45.973236  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:45.973299  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:46.376153  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:46.381160  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:46.472853  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:46.473599  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:46.872714  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:46.875812  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:46.881934  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:46.972948  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:46.973367  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:47.376247  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:47.381974  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:47.472982  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:47.473168  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:47.876742  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:47.881032  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:47.973087  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:47.973259  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:48.375987  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:48.380917  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:48.472648  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:48.472849  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:48.876836  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:48.880430  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:48.972776  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:48.973108  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:49.371890  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:49.376142  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:49.381131  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:49.472932  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:49.473088  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:49.875912  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:49.880633  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:49.973010  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:49.973590  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:50.376375  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:50.381864  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:50.473650  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:50.473681  385407 kapi.go:107] duration metric: took 53.003842114s to wait for kubernetes.io/minikube-addons=registry ...
	I0815 17:06:50.876364  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:50.880037  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:50.972431  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:51.372064  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:51.376038  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:51.380816  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:51.473234  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:51.955476  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:51.967619  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:51.975809  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:52.457240  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:52.458091  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:52.473821  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:52.876467  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:52.880722  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:52.974148  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:53.374234  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:53.376308  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:53.456778  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:53.473557  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:53.876114  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:53.881491  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:53.973301  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:54.376408  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:54.383219  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:54.483324  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:54.901994  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:54.902767  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:54.973085  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:55.376242  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:55.381312  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:55.472763  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:55.872886  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:55.876240  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:55.880857  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:55.972541  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:56.376413  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:56.380517  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:56.473164  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:56.877692  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:56.880514  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:56.973136  385407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:57.376234  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:57.381638  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:57.473048  385407 kapi.go:107] duration metric: took 1m0.004133248s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 17:06:57.876235  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:57.881582  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:58.371527  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:58.375729  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:06:58.380484  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:58.875923  385407 kapi.go:107] duration metric: took 56.002743268s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 17:06:58.877442  385407 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-703024 cluster.
	I0815 17:06:58.880208  385407 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 17:06:58.881222  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:58.882895  385407 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0815 17:06:59.380848  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:59.880949  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:00.468423  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:00.469536  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:00.881064  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:01.381421  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:01.881598  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:02.381491  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:02.872405  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:02.880370  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:03.381274  385407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:03.880433  385407 kapi.go:107] duration metric: took 1m4.503523017s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 17:07:03.882151  385407 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, ingress-dns, storage-provisioner, helm-tiller, metrics-server, storage-provisioner-rancher, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0815 17:07:03.883336  385407 addons.go:510] duration metric: took 1m12.458019521s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass ingress-dns storage-provisioner helm-tiller metrics-server storage-provisioner-rancher inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0815 17:07:05.372078  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:07.372821  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:09.873087  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:12.372099  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:14.873691  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:17.371763  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:19.871985  385407 pod_ready.go:103] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:21.371978  385407 pod_ready.go:93] pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:21.372006  385407 pod_ready.go:82] duration metric: took 1m8.505451068s for pod "metrics-server-8988944d9-flc8s" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:21.372018  385407 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xqk8k" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:21.375919  385407 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xqk8k" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:21.375939  385407 pod_ready.go:82] duration metric: took 3.912854ms for pod "nvidia-device-plugin-daemonset-xqk8k" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:21.375957  385407 pod_ready.go:39] duration metric: took 1m10.99813416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:07:21.375979  385407 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:07:21.376008  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:07:21.376062  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:07:21.410124  385407 cri.go:89] found id: "3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:21.410152  385407 cri.go:89] found id: ""
	I0815 17:07:21.410163  385407 logs.go:276] 1 containers: [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc]
	I0815 17:07:21.410217  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.413337  385407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 17:07:21.413389  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:07:21.445971  385407 cri.go:89] found id: "3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:21.446000  385407 cri.go:89] found id: ""
	I0815 17:07:21.446010  385407 logs.go:276] 1 containers: [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03]
	I0815 17:07:21.446066  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.449694  385407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 17:07:21.449753  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:07:21.484182  385407 cri.go:89] found id: "e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:21.484208  385407 cri.go:89] found id: ""
	I0815 17:07:21.484218  385407 logs.go:276] 1 containers: [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107]
	I0815 17:07:21.484271  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.487560  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:07:21.487613  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:07:21.520298  385407 cri.go:89] found id: "ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:21.520322  385407 cri.go:89] found id: ""
	I0815 17:07:21.520330  385407 logs.go:276] 1 containers: [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627]
	I0815 17:07:21.520380  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.523524  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:07:21.523591  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:07:21.556415  385407 cri.go:89] found id: "a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:21.556437  385407 cri.go:89] found id: ""
	I0815 17:07:21.556446  385407 logs.go:276] 1 containers: [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c]
	I0815 17:07:21.556489  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.559643  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:07:21.559696  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:07:21.594625  385407 cri.go:89] found id: "71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:21.594647  385407 cri.go:89] found id: ""
	I0815 17:07:21.594655  385407 logs.go:276] 1 containers: [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f]
	I0815 17:07:21.594706  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.598115  385407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 17:07:21.598181  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:07:21.630686  385407 cri.go:89] found id: "3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:21.630708  385407 cri.go:89] found id: ""
	I0815 17:07:21.630716  385407 logs.go:276] 1 containers: [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358]
	I0815 17:07:21.630757  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:21.633874  385407 logs.go:123] Gathering logs for etcd [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03] ...
	I0815 17:07:21.633896  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:21.683490  385407 logs.go:123] Gathering logs for coredns [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107] ...
	I0815 17:07:21.683521  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:21.717823  385407 logs.go:123] Gathering logs for kube-proxy [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c] ...
	I0815 17:07:21.717850  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:21.749631  385407 logs.go:123] Gathering logs for kube-controller-manager [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f] ...
	I0815 17:07:21.749657  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:21.806137  385407 logs.go:123] Gathering logs for kindnet [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358] ...
	I0815 17:07:21.806171  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:21.843758  385407 logs.go:123] Gathering logs for dmesg ...
	I0815 17:07:21.843785  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:07:21.869010  385407 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:07:21.869042  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:07:21.965521  385407 logs.go:123] Gathering logs for kube-scheduler [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627] ...
	I0815 17:07:21.965552  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:22.007157  385407 logs.go:123] Gathering logs for CRI-O ...
	I0815 17:07:22.007190  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 17:07:22.086492  385407 logs.go:123] Gathering logs for container status ...
	I0815 17:07:22.086531  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:07:22.126668  385407 logs.go:123] Gathering logs for kubelet ...
	I0815 17:07:22.126733  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:07:22.190335  385407 logs.go:123] Gathering logs for kube-apiserver [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc] ...
	I0815 17:07:22.190372  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:24.734652  385407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:07:24.748234  385407 api_server.go:72] duration metric: took 1m33.322956981s to wait for apiserver process to appear ...
	I0815 17:07:24.748258  385407 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:07:24.748301  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:07:24.748351  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:07:24.780350  385407 cri.go:89] found id: "3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:24.780376  385407 cri.go:89] found id: ""
	I0815 17:07:24.780388  385407 logs.go:276] 1 containers: [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc]
	I0815 17:07:24.780441  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.783624  385407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 17:07:24.783696  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:07:24.815446  385407 cri.go:89] found id: "3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:24.815466  385407 cri.go:89] found id: ""
	I0815 17:07:24.815476  385407 logs.go:276] 1 containers: [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03]
	I0815 17:07:24.815527  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.818638  385407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 17:07:24.818704  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:07:24.851543  385407 cri.go:89] found id: "e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:24.851562  385407 cri.go:89] found id: ""
	I0815 17:07:24.851576  385407 logs.go:276] 1 containers: [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107]
	I0815 17:07:24.851633  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.854745  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:07:24.854799  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:07:24.886958  385407 cri.go:89] found id: "ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:24.886982  385407 cri.go:89] found id: ""
	I0815 17:07:24.886992  385407 logs.go:276] 1 containers: [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627]
	I0815 17:07:24.887043  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.890269  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:07:24.890320  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:07:24.923133  385407 cri.go:89] found id: "a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:24.923154  385407 cri.go:89] found id: ""
	I0815 17:07:24.923162  385407 logs.go:276] 1 containers: [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c]
	I0815 17:07:24.923207  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.926544  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:07:24.926614  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:07:24.958401  385407 cri.go:89] found id: "71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:24.958425  385407 cri.go:89] found id: ""
	I0815 17:07:24.958435  385407 logs.go:276] 1 containers: [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f]
	I0815 17:07:24.958487  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.961717  385407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 17:07:24.961772  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:07:24.994751  385407 cri.go:89] found id: "3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:24.994771  385407 cri.go:89] found id: ""
	I0815 17:07:24.994778  385407 logs.go:276] 1 containers: [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358]
	I0815 17:07:24.994819  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:24.998255  385407 logs.go:123] Gathering logs for kubelet ...
	I0815 17:07:24.998278  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:07:25.053616  385407 logs.go:123] Gathering logs for dmesg ...
	I0815 17:07:25.053649  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:07:25.077668  385407 logs.go:123] Gathering logs for kube-apiserver [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc] ...
	I0815 17:07:25.077696  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:25.119884  385407 logs.go:123] Gathering logs for kube-controller-manager [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f] ...
	I0815 17:07:25.119914  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:25.177731  385407 logs.go:123] Gathering logs for kindnet [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358] ...
	I0815 17:07:25.177767  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:25.215502  385407 logs.go:123] Gathering logs for CRI-O ...
	I0815 17:07:25.215532  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 17:07:25.291742  385407 logs.go:123] Gathering logs for container status ...
	I0815 17:07:25.291780  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:07:25.332657  385407 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:07:25.332688  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:07:25.430198  385407 logs.go:123] Gathering logs for etcd [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03] ...
	I0815 17:07:25.430231  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:25.480647  385407 logs.go:123] Gathering logs for coredns [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107] ...
	I0815 17:07:25.480678  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:25.517396  385407 logs.go:123] Gathering logs for kube-scheduler [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627] ...
	I0815 17:07:25.517423  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:25.556595  385407 logs.go:123] Gathering logs for kube-proxy [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c] ...
	I0815 17:07:25.556623  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:28.089700  385407 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 17:07:28.094210  385407 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 17:07:28.095164  385407 api_server.go:141] control plane version: v1.31.0
	I0815 17:07:28.095188  385407 api_server.go:131] duration metric: took 3.346922594s to wait for apiserver health ...
	I0815 17:07:28.095196  385407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:07:28.095217  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:07:28.095267  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:07:28.128374  385407 cri.go:89] found id: "3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:28.128394  385407 cri.go:89] found id: ""
	I0815 17:07:28.128402  385407 logs.go:276] 1 containers: [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc]
	I0815 17:07:28.128447  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.131712  385407 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 17:07:28.131760  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:07:28.164423  385407 cri.go:89] found id: "3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:28.164444  385407 cri.go:89] found id: ""
	I0815 17:07:28.164452  385407 logs.go:276] 1 containers: [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03]
	I0815 17:07:28.164499  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.167667  385407 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 17:07:28.167736  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:07:28.201035  385407 cri.go:89] found id: "e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:28.201055  385407 cri.go:89] found id: ""
	I0815 17:07:28.201062  385407 logs.go:276] 1 containers: [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107]
	I0815 17:07:28.201116  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.204306  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:07:28.204367  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:07:28.238322  385407 cri.go:89] found id: "ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:28.238350  385407 cri.go:89] found id: ""
	I0815 17:07:28.238361  385407 logs.go:276] 1 containers: [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627]
	I0815 17:07:28.238421  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.241906  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:07:28.241961  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:07:28.277044  385407 cri.go:89] found id: "a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:28.277069  385407 cri.go:89] found id: ""
	I0815 17:07:28.277080  385407 logs.go:276] 1 containers: [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c]
	I0815 17:07:28.277140  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.280430  385407 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:07:28.280484  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:07:28.313924  385407 cri.go:89] found id: "71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:28.313947  385407 cri.go:89] found id: ""
	I0815 17:07:28.313955  385407 logs.go:276] 1 containers: [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f]
	I0815 17:07:28.314000  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.317333  385407 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 17:07:28.317388  385407 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:07:28.350499  385407 cri.go:89] found id: "3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:28.350528  385407 cri.go:89] found id: ""
	I0815 17:07:28.350537  385407 logs.go:276] 1 containers: [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358]
	I0815 17:07:28.350592  385407 ssh_runner.go:195] Run: which crictl
	I0815 17:07:28.354018  385407 logs.go:123] Gathering logs for kindnet [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358] ...
	I0815 17:07:28.354043  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358"
	I0815 17:07:28.392887  385407 logs.go:123] Gathering logs for CRI-O ...
	I0815 17:07:28.392918  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 17:07:28.464780  385407 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:07:28.464819  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:07:28.564141  385407 logs.go:123] Gathering logs for kube-apiserver [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc] ...
	I0815 17:07:28.564173  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc"
	I0815 17:07:28.608701  385407 logs.go:123] Gathering logs for coredns [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107] ...
	I0815 17:07:28.608733  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107"
	I0815 17:07:28.644976  385407 logs.go:123] Gathering logs for kube-proxy [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c] ...
	I0815 17:07:28.645006  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c"
	I0815 17:07:28.678355  385407 logs.go:123] Gathering logs for kube-controller-manager [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f] ...
	I0815 17:07:28.678386  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f"
	I0815 17:07:28.735603  385407 logs.go:123] Gathering logs for kubelet ...
	I0815 17:07:28.735639  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:07:28.789160  385407 logs.go:123] Gathering logs for dmesg ...
	I0815 17:07:28.789196  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:07:28.813650  385407 logs.go:123] Gathering logs for etcd [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03] ...
	I0815 17:07:28.813685  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03"
	I0815 17:07:28.862968  385407 logs.go:123] Gathering logs for kube-scheduler [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627] ...
	I0815 17:07:28.863002  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627"
	I0815 17:07:28.906849  385407 logs.go:123] Gathering logs for container status ...
	I0815 17:07:28.906891  385407 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:07:31.458443  385407 system_pods.go:59] 19 kube-system pods found
	I0815 17:07:31.458475  385407 system_pods.go:61] "coredns-6f6b679f8f-qkxj6" [34ae48c8-3d7b-4a77-8b13-13b8b10756f5] Running
	I0815 17:07:31.458480  385407 system_pods.go:61] "csi-hostpath-attacher-0" [7946b78c-985f-4cda-96a1-5c49966406a5] Running
	I0815 17:07:31.458484  385407 system_pods.go:61] "csi-hostpath-resizer-0" [82257fb0-be7a-4b13-9923-f696e123c103] Running
	I0815 17:07:31.458488  385407 system_pods.go:61] "csi-hostpathplugin-swhv8" [9149a811-b352-498e-805f-5de2e5a5a3ef] Running
	I0815 17:07:31.458491  385407 system_pods.go:61] "etcd-addons-703024" [c09918ca-f68f-4983-87a3-735fea26a55d] Running
	I0815 17:07:31.458496  385407 system_pods.go:61] "kindnet-c9vlm" [d5ebec8a-692a-46ac-aa63-8f88014adda2] Running
	I0815 17:07:31.458499  385407 system_pods.go:61] "kube-apiserver-addons-703024" [99caa053-eb58-456d-b8d5-a077317fb464] Running
	I0815 17:07:31.458503  385407 system_pods.go:61] "kube-controller-manager-addons-703024" [a7dc1511-bbdc-4663-ac6c-4b1e8b99087c] Running
	I0815 17:07:31.458506  385407 system_pods.go:61] "kube-ingress-dns-minikube" [e819b06b-0df3-45f9-a0de-807192f6978e] Running
	I0815 17:07:31.458509  385407 system_pods.go:61] "kube-proxy-nsvg6" [c5cafc62-f92a-4bee-a21e-ea2d555797e6] Running
	I0815 17:07:31.458512  385407 system_pods.go:61] "kube-scheduler-addons-703024" [73d8fa2f-1f2e-4d51-bcf7-bc3fa746cb84] Running
	I0815 17:07:31.458518  385407 system_pods.go:61] "metrics-server-8988944d9-flc8s" [1b94ea1a-e1d1-45d5-ba12-31457ddd2aab] Running
	I0815 17:07:31.458521  385407 system_pods.go:61] "nvidia-device-plugin-daemonset-xqk8k" [dd6bbf51-8737-4c2c-9596-00154e1ec52d] Running
	I0815 17:07:31.458525  385407 system_pods.go:61] "registry-6fb4cdfc84-jnqvt" [2df2b6d1-e4e8-4d1b-962b-574054625724] Running
	I0815 17:07:31.458528  385407 system_pods.go:61] "registry-proxy-4xk99" [7672bca9-2613-4a51-b743-107bdc30df7b] Running
	I0815 17:07:31.458533  385407 system_pods.go:61] "snapshot-controller-56fcc65765-5xmtm" [035efc15-66e6-4699-b4b2-f00adcaa95eb] Running
	I0815 17:07:31.458536  385407 system_pods.go:61] "snapshot-controller-56fcc65765-gqldd" [727e7a8f-5da3-4a26-b4bf-58402e345986] Running
	I0815 17:07:31.458542  385407 system_pods.go:61] "storage-provisioner" [dc5596da-d005-4633-893c-382dd8f2e28e] Running
	I0815 17:07:31.458545  385407 system_pods.go:61] "tiller-deploy-b48cc5f79-twgzw" [f0d20030-3d71-47ce-9f44-cf4f462d6c84] Running
	I0815 17:07:31.458551  385407 system_pods.go:74] duration metric: took 3.363349819s to wait for pod list to return data ...
	I0815 17:07:31.458563  385407 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:07:31.460776  385407 default_sa.go:45] found service account: "default"
	I0815 17:07:31.460797  385407 default_sa.go:55] duration metric: took 2.226514ms for default service account to be created ...
	I0815 17:07:31.460807  385407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:07:31.469667  385407 system_pods.go:86] 19 kube-system pods found
	I0815 17:07:31.469691  385407 system_pods.go:89] "coredns-6f6b679f8f-qkxj6" [34ae48c8-3d7b-4a77-8b13-13b8b10756f5] Running
	I0815 17:07:31.469696  385407 system_pods.go:89] "csi-hostpath-attacher-0" [7946b78c-985f-4cda-96a1-5c49966406a5] Running
	I0815 17:07:31.469700  385407 system_pods.go:89] "csi-hostpath-resizer-0" [82257fb0-be7a-4b13-9923-f696e123c103] Running
	I0815 17:07:31.469704  385407 system_pods.go:89] "csi-hostpathplugin-swhv8" [9149a811-b352-498e-805f-5de2e5a5a3ef] Running
	I0815 17:07:31.469707  385407 system_pods.go:89] "etcd-addons-703024" [c09918ca-f68f-4983-87a3-735fea26a55d] Running
	I0815 17:07:31.469712  385407 system_pods.go:89] "kindnet-c9vlm" [d5ebec8a-692a-46ac-aa63-8f88014adda2] Running
	I0815 17:07:31.469715  385407 system_pods.go:89] "kube-apiserver-addons-703024" [99caa053-eb58-456d-b8d5-a077317fb464] Running
	I0815 17:07:31.469719  385407 system_pods.go:89] "kube-controller-manager-addons-703024" [a7dc1511-bbdc-4663-ac6c-4b1e8b99087c] Running
	I0815 17:07:31.469724  385407 system_pods.go:89] "kube-ingress-dns-minikube" [e819b06b-0df3-45f9-a0de-807192f6978e] Running
	I0815 17:07:31.469727  385407 system_pods.go:89] "kube-proxy-nsvg6" [c5cafc62-f92a-4bee-a21e-ea2d555797e6] Running
	I0815 17:07:31.469733  385407 system_pods.go:89] "kube-scheduler-addons-703024" [73d8fa2f-1f2e-4d51-bcf7-bc3fa746cb84] Running
	I0815 17:07:31.469736  385407 system_pods.go:89] "metrics-server-8988944d9-flc8s" [1b94ea1a-e1d1-45d5-ba12-31457ddd2aab] Running
	I0815 17:07:31.469742  385407 system_pods.go:89] "nvidia-device-plugin-daemonset-xqk8k" [dd6bbf51-8737-4c2c-9596-00154e1ec52d] Running
	I0815 17:07:31.469746  385407 system_pods.go:89] "registry-6fb4cdfc84-jnqvt" [2df2b6d1-e4e8-4d1b-962b-574054625724] Running
	I0815 17:07:31.469751  385407 system_pods.go:89] "registry-proxy-4xk99" [7672bca9-2613-4a51-b743-107bdc30df7b] Running
	I0815 17:07:31.469755  385407 system_pods.go:89] "snapshot-controller-56fcc65765-5xmtm" [035efc15-66e6-4699-b4b2-f00adcaa95eb] Running
	I0815 17:07:31.469760  385407 system_pods.go:89] "snapshot-controller-56fcc65765-gqldd" [727e7a8f-5da3-4a26-b4bf-58402e345986] Running
	I0815 17:07:31.469766  385407 system_pods.go:89] "storage-provisioner" [dc5596da-d005-4633-893c-382dd8f2e28e] Running
	I0815 17:07:31.469772  385407 system_pods.go:89] "tiller-deploy-b48cc5f79-twgzw" [f0d20030-3d71-47ce-9f44-cf4f462d6c84] Running
	I0815 17:07:31.469779  385407 system_pods.go:126] duration metric: took 8.965785ms to wait for k8s-apps to be running ...
	I0815 17:07:31.469786  385407 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:07:31.469835  385407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:07:31.481045  385407 system_svc.go:56] duration metric: took 11.252737ms WaitForService to wait for kubelet
	I0815 17:07:31.481070  385407 kubeadm.go:582] duration metric: took 1m40.05579674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:07:31.481091  385407 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:07:31.483938  385407 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:07:31.483967  385407 node_conditions.go:123] node cpu capacity is 8
	I0815 17:07:31.483984  385407 node_conditions.go:105] duration metric: took 2.886832ms to run NodePressure ...
	I0815 17:07:31.483997  385407 start.go:241] waiting for startup goroutines ...
	I0815 17:07:31.484011  385407 start.go:246] waiting for cluster config update ...
	I0815 17:07:31.484035  385407 start.go:255] writing updated cluster config ...
	I0815 17:07:31.484348  385407 ssh_runner.go:195] Run: rm -f paused
	I0815 17:07:31.534562  385407 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 17:07:31.537462  385407 out.go:177] * Done! kubectl is now configured to use "addons-703024" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.907917904Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7559cbf597-gwb2j from CNI network \"kindnet\" (type=ptp)"
	Aug 15 17:11:06 addons-703024 crio[1033]: time="2024-08-15 17:11:06.945934238Z" level=info msg="Stopped pod sandbox: f878ace8a058203facb2add220fa77f876ec82f1febe14512489d13b4a49a653" id=e1282220-ffb3-4540-a1a5-3f1b6257c3ae name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:07 addons-703024 crio[1033]: time="2024-08-15 17:11:07.156925026Z" level=info msg="Removing container: c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3" id=3dff452b-555c-45bc-9117-132f11d37388 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 17:11:07 addons-703024 crio[1033]: time="2024-08-15 17:11:07.170967246Z" level=info msg="Removed container c22f268eef9dfa76de7138a4e463794f3ecc68aa241347dcc50ac8a007bdd3a3: ingress-nginx/ingress-nginx-controller-7559cbf597-gwb2j/controller" id=3dff452b-555c-45bc-9117-132f11d37388 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.110216497Z" level=info msg="Removing container: 689321e5039a26a4d3ec560781786f623f595419e18bcd3f665a699cdc7d4af9" id=d7fccaaa-80cf-4c89-baaf-8230f66add07 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.122568966Z" level=info msg="Removed container 689321e5039a26a4d3ec560781786f623f595419e18bcd3f665a699cdc7d4af9: ingress-nginx/ingress-nginx-admission-patch-gswvm/patch" id=d7fccaaa-80cf-4c89-baaf-8230f66add07 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.123684635Z" level=info msg="Removing container: 5ca7783051242815e6ba7bb5b35f6170e95b62454cc7f57320e912c84cb5f201" id=f8475f09-8df0-4c30-9d14-035c69cc498d name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.137820242Z" level=info msg="Removed container 5ca7783051242815e6ba7bb5b35f6170e95b62454cc7f57320e912c84cb5f201: ingress-nginx/ingress-nginx-admission-create-b729r/create" id=f8475f09-8df0-4c30-9d14-035c69cc498d name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.139208094Z" level=info msg="Stopping pod sandbox: f878ace8a058203facb2add220fa77f876ec82f1febe14512489d13b4a49a653" id=0387a5d5-09bd-40b8-a759-7159430b99b0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.139239266Z" level=info msg="Stopped pod sandbox (already stopped): f878ace8a058203facb2add220fa77f876ec82f1febe14512489d13b4a49a653" id=0387a5d5-09bd-40b8-a759-7159430b99b0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.139502620Z" level=info msg="Removing pod sandbox: f878ace8a058203facb2add220fa77f876ec82f1febe14512489d13b4a49a653" id=a15699cd-9510-4bd8-a1b7-4221b567a7c7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.145371731Z" level=info msg="Removed pod sandbox: f878ace8a058203facb2add220fa77f876ec82f1febe14512489d13b4a49a653" id=a15699cd-9510-4bd8-a1b7-4221b567a7c7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.145719450Z" level=info msg="Stopping pod sandbox: f83af2502cdb5c115b46b3f331a9f09f9fda05773eb9df51815f573e209bbc61" id=50d2c661-a47b-4179-aa01-56e4c077f188 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.145757378Z" level=info msg="Stopped pod sandbox (already stopped): f83af2502cdb5c115b46b3f331a9f09f9fda05773eb9df51815f573e209bbc61" id=50d2c661-a47b-4179-aa01-56e4c077f188 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.146100413Z" level=info msg="Removing pod sandbox: f83af2502cdb5c115b46b3f331a9f09f9fda05773eb9df51815f573e209bbc61" id=6bc0df98-a6a0-4d80-9acc-36408cfd8d1b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.152199958Z" level=info msg="Removed pod sandbox: f83af2502cdb5c115b46b3f331a9f09f9fda05773eb9df51815f573e209bbc61" id=6bc0df98-a6a0-4d80-9acc-36408cfd8d1b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.152495829Z" level=info msg="Stopping pod sandbox: 95b91c9934d00ab4136f95342dfc37ab27a73fe8cd6b030d5ec9bad2d5dcc0ef" id=19a49c36-db40-42f6-9355-b2935b4d77ab name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.152535972Z" level=info msg="Stopped pod sandbox (already stopped): 95b91c9934d00ab4136f95342dfc37ab27a73fe8cd6b030d5ec9bad2d5dcc0ef" id=19a49c36-db40-42f6-9355-b2935b4d77ab name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.152785281Z" level=info msg="Removing pod sandbox: 95b91c9934d00ab4136f95342dfc37ab27a73fe8cd6b030d5ec9bad2d5dcc0ef" id=92972614-6d2d-4317-9c5e-2221ae959aae name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.158091111Z" level=info msg="Removed pod sandbox: 95b91c9934d00ab4136f95342dfc37ab27a73fe8cd6b030d5ec9bad2d5dcc0ef" id=92972614-6d2d-4317-9c5e-2221ae959aae name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.158394677Z" level=info msg="Stopping pod sandbox: 0df8d50c66141e604aa8c949f26a1d038e9c7498a293b578744391b0e47802c1" id=432dc029-51e7-493f-8b85-5bc4ccbbe963 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.158432640Z" level=info msg="Stopped pod sandbox (already stopped): 0df8d50c66141e604aa8c949f26a1d038e9c7498a293b578744391b0e47802c1" id=432dc029-51e7-493f-8b85-5bc4ccbbe963 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.158704979Z" level=info msg="Removing pod sandbox: 0df8d50c66141e604aa8c949f26a1d038e9c7498a293b578744391b0e47802c1" id=d392a3fa-4ebb-4585-9daf-189665220a0d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 17:11:47 addons-703024 crio[1033]: time="2024-08-15 17:11:47.164398943Z" level=info msg="Removed pod sandbox: 0df8d50c66141e604aa8c949f26a1d038e9c7498a293b578744391b0e47802c1" id=d392a3fa-4ebb-4585-9daf-189665220a0d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 15 17:13:35 addons-703024 crio[1033]: time="2024-08-15 17:13:35.846890804Z" level=info msg="Stopping container: f9ecaf2bf81c510bd415ab2b484917741858e83ddc7d417ed8e85a6f602ff034 (timeout: 30s)" id=0cc43350-a908-4a00-9e94-036f5e8bf37d name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4085bf7aee9cf       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   e5ed20a3b12db       hello-world-app-55bf9c44b4-snj2m
	7ecc6adc40013       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   f4934699c8ef7       nginx
	2a9c8bcf784b3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   dd584d9ef798a       busybox
	f9ecaf2bf81c5       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   0654c245bdc8a       metrics-server-8988944d9-flc8s
	e91e474418831       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   ea09e370c8832       coredns-6f6b679f8f-qkxj6
	2367ce375da91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   10d3bce2278c3       storage-provisioner
	3cad0bae577bb       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                      7 minutes ago       Running             kindnet-cni               0                   b827c5f30f7ae       kindnet-c9vlm
	a2610cc2f65a0       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   2c5ece15c945e       kube-proxy-nsvg6
	ebb1bbdb3320c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        7 minutes ago       Running             kube-scheduler            0                   de4bf6c6026a6       kube-scheduler-addons-703024
	71100fb2e4a17       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        7 minutes ago       Running             kube-controller-manager   0                   d5be8c5a21aaa       kube-controller-manager-addons-703024
	3c5f0d2c0cdcd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   3dad7d545c672       etcd-addons-703024
	3b76e391faf0b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        7 minutes ago       Running             kube-apiserver            0                   f8c1a94595bf8       kube-apiserver-addons-703024
	
	
	==> coredns [e91e474418831ede7396ddd0df1155d285466dd8f4fb978277cc65815a349107] <==
	[INFO] 10.244.0.18:44457 - 45960 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067958s
	[INFO] 10.244.0.18:41997 - 2442 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004328647s
	[INFO] 10.244.0.18:41997 - 17286 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004745026s
	[INFO] 10.244.0.18:54351 - 12249 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005476729s
	[INFO] 10.244.0.18:54351 - 60380 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006174456s
	[INFO] 10.244.0.18:38182 - 3648 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005495164s
	[INFO] 10.244.0.18:38182 - 5959 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006003871s
	[INFO] 10.244.0.18:34511 - 43369 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077113s
	[INFO] 10.244.0.18:34511 - 6762 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000135304s
	[INFO] 10.244.0.21:53443 - 15524 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000188799s
	[INFO] 10.244.0.21:43106 - 30021 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000242695s
	[INFO] 10.244.0.21:33523 - 58160 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125607s
	[INFO] 10.244.0.21:60433 - 13007 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017985s
	[INFO] 10.244.0.21:57274 - 41431 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085854s
	[INFO] 10.244.0.21:34080 - 31342 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148649s
	[INFO] 10.244.0.21:41356 - 29279 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005300217s
	[INFO] 10.244.0.21:38023 - 38694 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005509356s
	[INFO] 10.244.0.21:60440 - 42289 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005143971s
	[INFO] 10.244.0.21:47892 - 44613 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006812477s
	[INFO] 10.244.0.21:36159 - 61670 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005681369s
	[INFO] 10.244.0.21:49914 - 37324 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006294609s
	[INFO] 10.244.0.21:39400 - 26494 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00078324s
	[INFO] 10.244.0.21:52668 - 23496 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000865943s
	[INFO] 10.244.0.26:47241 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000220749s
	[INFO] 10.244.0.26:42791 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000119979s
	
	
	==> describe nodes <==
	Name:               addons-703024
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-703024
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=addons-703024
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_05_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-703024
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:05:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-703024
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:13:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:11:22 +0000   Thu, 15 Aug 2024 17:05:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:11:22 +0000   Thu, 15 Aug 2024 17:05:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:11:22 +0000   Thu, 15 Aug 2024 17:05:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:11:22 +0000   Thu, 15 Aug 2024 17:06:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-703024
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 1aa33dd6d4c249a48c60190f74f2479d
	  System UUID:                bf551e8d-2b73-4bbf-8d69-9efc34772b05
	  Boot ID:                    2d86d768-5fa6-4bed-a8b9-fa4131d6b0e8
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  default                     hello-world-app-55bf9c44b4-snj2m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 coredns-6f6b679f8f-qkxj6                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m44s
	  kube-system                 etcd-addons-703024                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m50s
	  kube-system                 kindnet-c9vlm                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m44s
	  kube-system                 kube-apiserver-addons-703024             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 kube-controller-manager-addons-703024    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 kube-proxy-nsvg6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 kube-scheduler-addons-703024             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 metrics-server-8988944d9-flc8s           100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m41s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m40s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  7m55s (x8 over 7m55s)  kubelet          Node addons-703024 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m55s (x8 over 7m55s)  kubelet          Node addons-703024 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m55s (x7 over 7m55s)  kubelet          Node addons-703024 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m50s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m50s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m50s                  kubelet          Node addons-703024 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m50s                  kubelet          Node addons-703024 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m50s                  kubelet          Node addons-703024 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m45s                  node-controller  Node addons-703024 event: Registered Node addons-703024 in Controller
	  Normal   NodeReady                7m26s                  kubelet          Node addons-703024 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 02 42 7e dc ac 84 02 42 c0 a8 5e 02 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b84f812507c4
	[  +0.000003] ll header: 00000000: 02 42 7e dc ac 84 02 42 c0 a8 5e 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b84f812507c4
	[  +0.000002] ll header: 00000000: 02 42 7e dc ac 84 02 42 c0 a8 5e 02 08 00
	[Aug15 16:15] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-12dfa1aa7ae6
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-12dfa1aa7ae6
	[  +0.000005] ll header: 00000000: 02 42 9e 55 12 5a 02 42 c0 a8 55 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 9e 55 12 5a 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-12dfa1aa7ae6
	[  +0.000001] ll header: 00000000: 02 42 9e 55 12 5a 02 42 c0 a8 55 02 08 00
	[Aug15 17:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[  +1.027553] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[  +2.015829] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[  +4.191667] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[Aug15 17:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[ +16.126812] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	[ +33.277609] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 6a dc 63 eb 84 fa c2 89 6c 3a 09 45 08 00
	
	
	==> etcd [3c5f0d2c0cdcd18717d218a8a95214fc8eb7e5b6c68c46e35ec50a0236a4aa03] <==
	{"level":"info","ts":"2024-08-15T17:05:54.854364Z","caller":"traceutil/trace.go:171","msg":"trace[681308291] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:416; }","duration":"187.683267ms","start":"2024-08-15T17:05:54.666665Z","end":"2024-08-15T17:05:54.854348Z","steps":["trace[681308291] 'agreement among raft nodes before linearized reading'  (duration: 187.454614ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:54.854237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.105941ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:05:54.854814Z","caller":"traceutil/trace.go:171","msg":"trace[1328195484] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:416; }","duration":"190.681149ms","start":"2024-08-15T17:05:54.664120Z","end":"2024-08-15T17:05:54.854801Z","steps":["trace[1328195484] 'agreement among raft nodes before linearized reading'  (duration: 190.088655ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:54.854312Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.389175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3144"}
	{"level":"info","ts":"2024-08-15T17:05:54.855197Z","caller":"traceutil/trace.go:171","msg":"trace[2066636726] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:416; }","duration":"190.267686ms","start":"2024-08-15T17:05:54.664918Z","end":"2024-08-15T17:05:54.855185Z","steps":["trace[2066636726] 'agreement among raft nodes before linearized reading'  (duration: 189.343554ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:54.871244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.876512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:140"}
	{"level":"info","ts":"2024-08-15T17:05:54.876838Z","caller":"traceutil/trace.go:171","msg":"trace[1225753751] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:423; }","duration":"111.470103ms","start":"2024-08-15T17:05:54.765347Z","end":"2024-08-15T17:05:54.876817Z","steps":["trace[1225753751] 'agreement among raft nodes before linearized reading'  (duration: 104.390693ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:05:55.258919Z","caller":"traceutil/trace.go:171","msg":"trace[1448782441] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"182.470761ms","start":"2024-08-15T17:05:55.076429Z","end":"2024-08-15T17:05:55.258899Z","steps":["trace[1448782441] 'process raft request'  (duration: 178.385492ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:05:55.258984Z","caller":"traceutil/trace.go:171","msg":"trace[1592769233] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"105.759591ms","start":"2024-08-15T17:05:55.153206Z","end":"2024-08-15T17:05:55.258966Z","steps":["trace[1592769233] 'process raft request'  (duration: 105.338276ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:05:55.259081Z","caller":"traceutil/trace.go:171","msg":"trace[395357935] linearizableReadLoop","detail":"{readStateIndex:448; appliedIndex:446; }","duration":"105.956359ms","start":"2024-08-15T17:05:55.153115Z","end":"2024-08-15T17:05:55.259071Z","steps":["trace[395357935] 'read index received'  (duration: 1.570649ms)","trace[395357935] 'applied index is now lower than readState.Index'  (duration: 104.384991ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T17:05:55.259152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.02251ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-nsvg6\" ","response":"range_response_count:1 size:4833"}
	{"level":"info","ts":"2024-08-15T17:05:55.260173Z","caller":"traceutil/trace.go:171","msg":"trace[2126902612] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-nsvg6; range_end:; response_count:1; response_revision:439; }","duration":"107.052358ms","start":"2024-08-15T17:05:55.153109Z","end":"2024-08-15T17:05:55.260161Z","steps":["trace[2126902612] 'agreement among raft nodes before linearized reading'  (duration: 105.98928ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:55.260341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.029767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-15T17:05:55.260395Z","caller":"traceutil/trace.go:171","msg":"trace[844264442] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:439; }","duration":"107.091451ms","start":"2024-08-15T17:05:55.153295Z","end":"2024-08-15T17:05:55.260386Z","steps":["trace[844264442] 'agreement among raft nodes before linearized reading'  (duration: 107.003589ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:05:55.259178Z","caller":"traceutil/trace.go:171","msg":"trace[882611892] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"105.815786ms","start":"2024-08-15T17:05:55.153356Z","end":"2024-08-15T17:05:55.259172Z","steps":["trace[882611892] 'process raft request'  (duration: 105.253992ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:55.260919Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.863087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:05:55.259203Z","caller":"traceutil/trace.go:171","msg":"trace[242474293] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"101.056527ms","start":"2024-08-15T17:05:55.158141Z","end":"2024-08-15T17:05:55.259198Z","steps":["trace[242474293] 'process raft request'  (duration: 100.50698ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:05:55.264368Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.900635ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3454"}
	{"level":"info","ts":"2024-08-15T17:05:55.265865Z","caller":"traceutil/trace.go:171","msg":"trace[496978317] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:439; }","duration":"112.39925ms","start":"2024-08-15T17:05:55.153452Z","end":"2024-08-15T17:05:55.265851Z","steps":["trace[496978317] 'agreement among raft nodes before linearized reading'  (duration: 110.711625ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:05:55.266064Z","caller":"traceutil/trace.go:171","msg":"trace[91201434] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:439; }","duration":"110.009431ms","start":"2024-08-15T17:05:55.156045Z","end":"2024-08-15T17:05:55.266054Z","steps":["trace[91201434] 'agreement among raft nodes before linearized reading'  (duration: 104.849343ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:07:00.465409Z","caller":"traceutil/trace.go:171","msg":"trace[363298197] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"102.704028ms","start":"2024-08-15T17:07:00.362681Z","end":"2024-08-15T17:07:00.465385Z","steps":["trace[363298197] 'process raft request'  (duration: 84.870409ms)","trace[363298197] 'compare'  (duration: 17.678593ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T17:07:00.465506Z","caller":"traceutil/trace.go:171","msg":"trace[1027133607] transaction","detail":"{read_only:false; response_revision:1237; number_of_response:1; }","duration":"100.730515ms","start":"2024-08-15T17:07:00.364755Z","end":"2024-08-15T17:07:00.465486Z","steps":["trace[1027133607] 'process raft request'  (duration: 100.576118ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:07:06.088456Z","caller":"traceutil/trace.go:171","msg":"trace[1404637144] transaction","detail":"{read_only:false; response_revision:1257; number_of_response:1; }","duration":"112.660711ms","start":"2024-08-15T17:07:05.975767Z","end":"2024-08-15T17:07:06.088428Z","steps":["trace[1404637144] 'process raft request'  (duration: 111.880074ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:08:41.397318Z","caller":"traceutil/trace.go:171","msg":"trace[1016833019] transaction","detail":"{read_only:false; response_revision:1906; number_of_response:1; }","duration":"106.662872ms","start":"2024-08-15T17:08:41.290633Z","end":"2024-08-15T17:08:41.397296Z","steps":["trace[1016833019] 'process raft request'  (duration: 106.027956ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:08:41.397352Z","caller":"traceutil/trace.go:171","msg":"trace[994476116] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1907; }","duration":"106.148462ms","start":"2024-08-15T17:08:41.291185Z","end":"2024-08-15T17:08:41.397334Z","steps":["trace[994476116] 'process raft request'  (duration: 105.992369ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:13:37 up  1:56,  0 users,  load average: 0.08, 0.28, 0.28
	Linux addons-703024 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3cad0bae577bbdaf8a446b7e433bf5b0b2741a29958289509c498a1c3089c358] <==
	E0815 17:12:22.697984       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 17:12:30.253456       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:12:30.253493       1 main.go:299] handling current node
	I0815 17:12:40.253981       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:12:40.254014       1 main.go:299] handling current node
	W0815 17:12:43.516259       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 17:12:43.516293       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0815 17:12:45.273946       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 17:12:45.273976       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 17:12:50.253199       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:12:50.253239       1 main.go:299] handling current node
	W0815 17:12:58.134766       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:12:58.134797       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 17:13:00.253479       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:13:00.253518       1 main.go:299] handling current node
	I0815 17:13:10.253427       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:13:10.253477       1 main.go:299] handling current node
	W0815 17:13:19.305728       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 17:13:19.305762       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 17:13:20.253434       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:13:20.253478       1 main.go:299] handling current node
	W0815 17:13:25.189793       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 17:13:25.189854       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 17:13:30.253500       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:13:30.253538       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3b76e391faf0b4585f2b9fabf5b70eeab8ceafa19a3436bda15a132827ab50cc] <==
	E0815 17:07:58.227636       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0815 17:07:58.232817       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0815 17:08:13.234265       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0815 17:08:13.871541       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0815 17:08:14.675872       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:40078: read: connection reset by peer
	E0815 17:08:14.680897       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54356: use of closed network connection
	I0815 17:08:17.917759       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.196.218"}
	I0815 17:08:40.690176       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 17:08:40.865091       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.95.210"}
	I0815 17:08:41.185500       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 17:08:42.400462       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0815 17:08:43.066391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:08:43.066447       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:08:43.173728       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:08:43.173783       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:08:43.254718       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:08:43.254765       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:08:43.264029       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:08:43.264078       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:08:43.267449       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:08:43.267876       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0815 17:08:44.255030       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0815 17:08:44.268130       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0815 17:08:44.464760       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0815 17:11:01.806310       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.107.246"}
	
	
	==> kube-controller-manager [71100fb2e4a1737b4d407d3e6212064a6faae5badc5df812b24256771b12fc3f] <==
	E0815 17:11:21.350327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 17:11:22.725297       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-703024"
	W0815 17:11:33.642896       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:11:33.642940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:11:50.986771       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:11:50.986818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:11:54.312051       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:11:54.312097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:12:08.168425       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:12:08.168466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:12:23.920349       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:12:23.920392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:12:32.430444       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:12:32.430488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:12:40.366066       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:12:40.366112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:12:55.117547       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:12:55.117589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:13:06.414600       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:13:06.414645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:13:13.591451       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:13:13.591498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:13:27.830125       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:13:27.830166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 17:13:35.837720       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="8.424µs"
	
	
	==> kube-proxy [a2610cc2f65a0f81a7d692e48a0aa38d331a9efd13583acf7295df7440532f9c] <==
	I0815 17:05:55.364792       1 server_linux.go:66] "Using iptables proxy"
	I0815 17:05:56.072374       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0815 17:05:56.077051       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:05:56.473069       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0815 17:05:56.473237       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:05:56.476933       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:05:56.477666       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:05:56.477692       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:05:56.482443       1 config.go:197] "Starting service config controller"
	I0815 17:05:56.482467       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:05:56.482508       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:05:56.482512       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:05:56.482845       1 config.go:326] "Starting node config controller"
	I0815 17:05:56.482852       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:05:56.652874       1 shared_informer.go:320] Caches are synced for node config
	I0815 17:05:56.652923       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:05:56.653035       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ebb1bbdb3320c73a6253370354e17218d09d6d445d494531cac1dc130f2a3627] <==
	W0815 17:05:44.575955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:05:44.575979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.409182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:05:45.409220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.498688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 17:05:45.498735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.527119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:05:45.527167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.554826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:05:45.554872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.625783       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 17:05:45.625832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.649032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 17:05:45.649076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.668376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:05:45.668425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.677941       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 17:05:45.677977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.726301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 17:05:45.726344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.739847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 17:05:45.739917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:05:45.863217       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 17:05:45.863262       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 17:05:48.173234       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 17:12:07 addons-703024 kubelet[1646]: E0815 17:12:07.015426    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741927015141932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:12:17 addons-703024 kubelet[1646]: E0815 17:12:17.019540    1646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741937018873243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:12:17 addons-703024 kubelet[1646]: E0815 17:12:17.019587    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741937018873243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:12:20 addons-703024 kubelet[1646]: I0815 17:12:20.862346    1646 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 17:12:27 addons-703024 kubelet[1646]: E0815 17:12:27.022168    1646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741947021922041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:12:27 addons-703024 kubelet[1646]: E0815 17:12:27.022200    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741947021922041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:12:37 addons-703024 kubelet[1646]: E0815 17:12:37.024676    1646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741957024423044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:12:37 addons-703024 kubelet[1646]: E0815 17:12:37.024714    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741957024423044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:12:47 addons-703024 kubelet[1646]: E0815 17:12:47.027177    1646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741967026942551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:12:47 addons-703024 kubelet[1646]: E0815 17:12:47.027212    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741967026942551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:12:57 addons-703024 kubelet[1646]: E0815 17:12:57.030129    1646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741977029867118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:12:57 addons-703024 kubelet[1646]: E0815 17:12:57.030167    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741977029867118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:13:07 addons-703024 kubelet[1646]: E0815 17:13:07.032422    1646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741987032167004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:13:07 addons-703024 kubelet[1646]: E0815 17:13:07.032453    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741987032167004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:13:17 addons-703024 kubelet[1646]: E0815 17:13:17.035672    1646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741997035428481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:13:17 addons-703024 kubelet[1646]: E0815 17:13:17.035708    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723741997035428481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:13:23 addons-703024 kubelet[1646]: I0815 17:13:23.861804    1646 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-6f6b679f8f-qkxj6" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 17:13:27 addons-703024 kubelet[1646]: E0815 17:13:27.037711    1646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742007037496959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:13:27 addons-703024 kubelet[1646]: E0815 17:13:27.037741    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742007037496959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:13:37 addons-703024 kubelet[1646]: E0815 17:13:37.040613    1646 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742017040382601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:13:37 addons-703024 kubelet[1646]: E0815 17:13:37.040654    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742017040382601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:613264,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:13:37 addons-703024 kubelet[1646]: I0815 17:13:37.189972    1646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnfbz\" (UniqueName: \"kubernetes.io/projected/1b94ea1a-e1d1-45d5-ba12-31457ddd2aab-kube-api-access-mnfbz\") pod \"1b94ea1a-e1d1-45d5-ba12-31457ddd2aab\" (UID: \"1b94ea1a-e1d1-45d5-ba12-31457ddd2aab\") "
	Aug 15 17:13:37 addons-703024 kubelet[1646]: I0815 17:13:37.190027    1646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1b94ea1a-e1d1-45d5-ba12-31457ddd2aab-tmp-dir\") pod \"1b94ea1a-e1d1-45d5-ba12-31457ddd2aab\" (UID: \"1b94ea1a-e1d1-45d5-ba12-31457ddd2aab\") "
	Aug 15 17:13:37 addons-703024 kubelet[1646]: I0815 17:13:37.190302    1646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b94ea1a-e1d1-45d5-ba12-31457ddd2aab-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "1b94ea1a-e1d1-45d5-ba12-31457ddd2aab" (UID: "1b94ea1a-e1d1-45d5-ba12-31457ddd2aab"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 15 17:13:37 addons-703024 kubelet[1646]: I0815 17:13:37.191791    1646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b94ea1a-e1d1-45d5-ba12-31457ddd2aab-kube-api-access-mnfbz" (OuterVolumeSpecName: "kube-api-access-mnfbz") pod "1b94ea1a-e1d1-45d5-ba12-31457ddd2aab" (UID: "1b94ea1a-e1d1-45d5-ba12-31457ddd2aab"). InnerVolumeSpecName "kube-api-access-mnfbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	
	
	==> storage-provisioner [2367ce375da91bbe5b92ba7e6ed79bebfc4f04ff85717728ddd65239f23388bc] <==
	I0815 17:06:11.402385       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 17:06:11.456627       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 17:06:11.456681       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 17:06:11.465445       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 17:06:11.465625       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-703024_1c0de0d6-d953-4030-88da-526a4eb6bff7!
	I0815 17:06:11.466374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ffb4816b-b285-4153-8f69-80ad7ec9bddb", APIVersion:"v1", ResourceVersion:"938", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-703024_1c0de0d6-d953-4030-88da-526a4eb6bff7 became leader
	I0815 17:06:11.566107       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-703024_1c0de0d6-d953-4030-88da-526a4eb6bff7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-703024 -n addons-703024
helpers_test.go:261: (dbg) Run:  kubectl --context addons-703024 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (331.87s)
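For context: the kube-controller-manager log above shows repeated "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" errors, consistent with the aggregated metrics API (v1beta1.metrics.k8s.io) becoming unavailable once the metrics-server pod is torn down. A minimal triage sequence one could run against this cluster, assuming the standard k8s-app=metrics-server label (illustrative commands, not part of the test):

# Is the aggregated metrics API registered and Available?
kubectl --context addons-703024 get apiservice v1beta1.metrics.k8s.io

# Are the metrics-server deployment and pod healthy? (label is the upstream default, assumed here)
kubectl --context addons-703024 -n kube-system get deploy,pods -l k8s-app=metrics-server

# Exercise the metrics pipeline end to end
kubectl --context addons-703024 top nodes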

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-605215 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.834270883s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-605215 image ls: (2.243535219s)
functional_test.go:446: expected "kicbase/echo-server:functional-605215" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.08s)
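For reference, the round trip this test exercises can be replayed by hand with the same profile and tarball path shown in the log above; a sketch (the grep filter is illustrative, not part of the test):

# Load the saved image tarball into the cluster's cri-o image store
out/minikube-linux-amd64 -p functional-605215 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr

# Confirm the expected tag is present
out/minikube-linux-amd64 -p functional-605215 image ls | grep echo-server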

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (14.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-896691 node delete m03 -v=7 --alsologtostderr: (11.347179321s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:516: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-896691       NotReady   control-plane   6m9s    v1.31.0
	ha-896691-m02   Ready      control-plane   5m46s   v1.31.0
	ha-896691-m04   Ready      <none>          4m31s   v1.31.0

                                                
                                                
-- /stdout --
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:524: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
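A Ready status of Unknown means the node controller has stopped receiving status updates from that node's kubelet; here ha-896691 went Unknown after m03 was deleted. The same condition can be read without the go-template above, e.g. (a hedged alternative, not what the test runs):

# Ready condition of a single node via jsonpath
kubectl get node ha-896691 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

# Human-readable overview of all nodes
kubectl get nodes -o wide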
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-896691
helpers_test.go:235: (dbg) docker inspect ha-896691:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b9db8034efc52b5b26080c945d3420002981adb79d465d69f932658fe861d8aa",
	        "Created": "2024-08-15T17:17:18.932847586Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 454609,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T17:21:05.172442633Z",
	            "FinishedAt": "2024-08-15T17:21:04.53757235Z"
	        },
	        "Image": "sha256:49d4702e5c94195d7796cb79f5fbc9d7cc584c1c41f3c58bf1694d1da009b2f6",
	        "ResolvConfPath": "/var/lib/docker/containers/b9db8034efc52b5b26080c945d3420002981adb79d465d69f932658fe861d8aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9db8034efc52b5b26080c945d3420002981adb79d465d69f932658fe861d8aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9db8034efc52b5b26080c945d3420002981adb79d465d69f932658fe861d8aa/hosts",
	        "LogPath": "/var/lib/docker/containers/b9db8034efc52b5b26080c945d3420002981adb79d465d69f932658fe861d8aa/b9db8034efc52b5b26080c945d3420002981adb79d465d69f932658fe861d8aa-json.log",
	        "Name": "/ha-896691",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-896691:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-896691",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2b02bbebd92d2ed2f28b1b7dcee03c414896ac75d132fc312854bd8976c0059a-init/diff:/var/lib/docker/overlay2/debad26787101f2e0bd77abae2a4f62ccd76a5180cc196365483720250fb2357/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b02bbebd92d2ed2f28b1b7dcee03c414896ac75d132fc312854bd8976c0059a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b02bbebd92d2ed2f28b1b7dcee03c414896ac75d132fc312854bd8976c0059a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b02bbebd92d2ed2f28b1b7dcee03c414896ac75d132fc312854bd8976c0059a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-896691",
	                "Source": "/var/lib/docker/volumes/ha-896691/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-896691",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-896691",
	                "name.minikube.sigs.k8s.io": "ha-896691",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e4795e96de2e45af06891052aa471e333ef40a5e308552d89bb3783c231a8914",
	            "SandboxKey": "/var/run/docker/netns/e4795e96de2e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-896691": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f725c910872f0b91f94e8b8edbecf6a50f16c9dfa021d1d70eb6ecd7a116f426",
	                    "EndpointID": "efc1a24de79c1dd814aa50076f7ddc75d4b7caaafb2cc175f8e7b5169b07dcf9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-896691",
	                        "b9db8034efc5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
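
The post-mortem leans on `docker inspect`; single fields such as the container state can be read with a --format template, the same pattern the restart path in the logs below uses (`docker container inspect ha-896691 --format={{.State.Status}}`). A minimal sketch in Go (an illustration under the assumption of a docker CLI on PATH; the container name is the profile from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "ha-896691" // container/profile name from the run above

	// Read a single field instead of parsing the full inspect JSON.
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Printf("container %s is %s\n", name, strings.TrimSpace(string(out)))
}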
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-896691 -n ha-896691
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-896691 logs -n 25: (1.315403911s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-896691 ssh -n                                                                | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n ha-896691-m02 sudo cat                                         | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | /home/docker/cp-test_ha-896691-m03_ha-896691-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-896691 cp ha-896691-m03:/home/docker/cp-test.txt                             | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m04:/home/docker/cp-test_ha-896691-m03_ha-896691-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n                                                                | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n ha-896691-m04 sudo cat                                         | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | /home/docker/cp-test_ha-896691-m03_ha-896691-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-896691 cp testdata/cp-test.txt                                               | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n                                                                | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-896691 cp ha-896691-m04:/home/docker/cp-test.txt                             | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile878299019/001/cp-test_ha-896691-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n                                                                | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-896691 cp ha-896691-m04:/home/docker/cp-test.txt                             | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691:/home/docker/cp-test_ha-896691-m04_ha-896691.txt                      |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n                                                                | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n ha-896691 sudo cat                                             | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | /home/docker/cp-test_ha-896691-m04_ha-896691.txt                                |           |         |         |                     |                     |
	| cp      | ha-896691 cp ha-896691-m04:/home/docker/cp-test.txt                             | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m02:/home/docker/cp-test_ha-896691-m04_ha-896691-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n                                                                | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n ha-896691-m02 sudo cat                                         | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | /home/docker/cp-test_ha-896691-m04_ha-896691-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-896691 cp ha-896691-m04:/home/docker/cp-test.txt                             | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m03:/home/docker/cp-test_ha-896691-m04_ha-896691-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n                                                                | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | ha-896691-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-896691 ssh -n ha-896691-m03 sudo cat                                         | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:19 UTC |
	|         | /home/docker/cp-test_ha-896691-m04_ha-896691-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-896691 node stop m02 -v=7                                                    | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:19 UTC | 15 Aug 24 17:20 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-896691 node start m02 -v=7                                                   | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:20 UTC | 15 Aug 24 17:20 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-896691 -v=7                                                          | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:20 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-896691 -v=7                                                               | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:20 UTC | 15 Aug 24 17:21 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-896691 --wait=true -v=7                                                   | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:21 UTC | 15 Aug 24 17:23 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-896691                                                               | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:23 UTC |                     |
	| node    | ha-896691 node delete m03 -v=7                                                  | ha-896691 | jenkins | v1.33.1 | 15 Aug 24 17:23 UTC | 15 Aug 24 17:23 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:21:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:21:04.833643  454315 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:21:04.833912  454315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:21:04.833923  454315 out.go:358] Setting ErrFile to fd 2...
	I0815 17:21:04.833928  454315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:21:04.834202  454315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:21:04.834760  454315 out.go:352] Setting JSON to false
	I0815 17:21:04.835757  454315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7417,"bootTime":1723735048,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:21:04.835819  454315 start.go:139] virtualization: kvm guest
	I0815 17:21:04.838794  454315 out.go:177] * [ha-896691] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:21:04.839959  454315 notify.go:220] Checking for updates...
	I0815 17:21:04.839999  454315 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:21:04.841270  454315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:21:04.842421  454315 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:21:04.843617  454315 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	I0815 17:21:04.844923  454315 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:21:04.846134  454315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:21:04.847701  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:21:04.847781  454315 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:21:04.870420  454315 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:21:04.870569  454315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:21:04.916359  454315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2024-08-15 17:21:04.907761461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:21:04.916461  454315 docker.go:307] overlay module found
	I0815 17:21:04.918327  454315 out.go:177] * Using the docker driver based on existing profile
	I0815 17:21:04.919468  454315 start.go:297] selected driver: docker
	I0815 17:21:04.919486  454315 start.go:901] validating driver "docker" against &{Name:ha-896691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-896691 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:21:04.919621  454315 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:21:04.919695  454315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:21:04.966713  454315 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2024-08-15 17:21:04.957194659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:21:04.967370  454315 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:21:04.967445  454315 cni.go:84] Creating CNI manager for ""
	I0815 17:21:04.967461  454315 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 17:21:04.967521  454315 start.go:340] cluster config:
	{Name:ha-896691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-896691 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:21:04.969417  454315 out.go:177] * Starting "ha-896691" primary control-plane node in "ha-896691" cluster
	I0815 17:21:04.970536  454315 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 17:21:04.971720  454315 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:21:04.973038  454315 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:21:04.973074  454315 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:21:04.973083  454315 cache.go:56] Caching tarball of preloaded images
	I0815 17:21:04.973120  454315 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:21:04.973166  454315 preload.go:172] Found /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:21:04.973176  454315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:21:04.973291  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	W0815 17:21:04.992360  454315 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 17:21:04.992387  454315 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:21:04.992465  454315 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:21:04.992488  454315 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:21:04.992494  454315 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:21:04.992502  454315 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:21:04.992509  454315 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 17:21:04.993599  454315 image.go:273] response: 
	I0815 17:21:05.052153  454315 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 17:21:05.052197  454315 cache.go:194] Successfully downloaded all kic artifacts
	I0815 17:21:05.052249  454315 start.go:360] acquireMachinesLock for ha-896691: {Name:mke16ca8fb2a6c10f367e1ad295206f7608c8abc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:21:05.052326  454315 start.go:364] duration metric: took 48.217µs to acquireMachinesLock for "ha-896691"
	I0815 17:21:05.052352  454315 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:21:05.052362  454315 fix.go:54] fixHost starting: 
	I0815 17:21:05.052648  454315 cli_runner.go:164] Run: docker container inspect ha-896691 --format={{.State.Status}}
	I0815 17:21:05.068964  454315 fix.go:112] recreateIfNeeded on ha-896691: state=Stopped err=<nil>
	W0815 17:21:05.068992  454315 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:21:05.070590  454315 out.go:177] * Restarting existing docker container for "ha-896691" ...
	I0815 17:21:05.071705  454315 cli_runner.go:164] Run: docker start ha-896691
	I0815 17:21:05.316687  454315 cli_runner.go:164] Run: docker container inspect ha-896691 --format={{.State.Status}}
	I0815 17:21:05.334485  454315 kic.go:430] container "ha-896691" state is running.
	I0815 17:21:05.334950  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691
	I0815 17:21:05.352490  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	I0815 17:21:05.352795  454315 machine.go:93] provisionDockerMachine start ...
	I0815 17:21:05.352882  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:21:05.371107  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:21:05.371352  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0815 17:21:05.371369  454315 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:21:05.371954  454315 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36402->127.0.0.1:33178: read: connection reset by peer
	I0815 17:21:08.503801  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896691
	
	I0815 17:21:08.503832  454315 ubuntu.go:169] provisioning hostname "ha-896691"
	I0815 17:21:08.503891  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:21:08.521219  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:21:08.521438  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0815 17:21:08.521461  454315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-896691 && echo "ha-896691" | sudo tee /etc/hostname
	I0815 17:21:08.666790  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896691
	
	I0815 17:21:08.666896  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:21:08.684725  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:21:08.684927  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0815 17:21:08.684947  454315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-896691' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-896691/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-896691' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:21:08.816396  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:21:08.816433  454315 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19450-377193/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-377193/.minikube}
	I0815 17:21:08.816491  454315 ubuntu.go:177] setting up certificates
	I0815 17:21:08.816509  454315 provision.go:84] configureAuth start
	I0815 17:21:08.816611  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691
	I0815 17:21:08.832697  454315 provision.go:143] copyHostCerts
	I0815 17:21:08.832736  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem
	I0815 17:21:08.832765  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem, removing ...
	I0815 17:21:08.832774  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem
	I0815 17:21:08.832841  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem (1078 bytes)
	I0815 17:21:08.832911  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem
	I0815 17:21:08.832929  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem, removing ...
	I0815 17:21:08.832935  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem
	I0815 17:21:08.832961  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem (1123 bytes)
	I0815 17:21:08.833000  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem
	I0815 17:21:08.833016  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem, removing ...
	I0815 17:21:08.833022  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem
	I0815 17:21:08.833042  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem (1675 bytes)
	I0815 17:21:08.833095  454315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem org=jenkins.ha-896691 san=[127.0.0.1 192.168.49.2 ha-896691 localhost minikube]
	I0815 17:21:08.903723  454315 provision.go:177] copyRemoteCerts
	I0815 17:21:08.903783  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:21:08.903829  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:21:08.920534  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691/id_rsa Username:docker}
	I0815 17:21:09.012789  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:21:09.012853  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 17:21:09.033573  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:21:09.033662  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 17:21:09.054147  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:21:09.054219  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:21:09.074596  454315 provision.go:87] duration metric: took 258.068692ms to configureAuth
	I0815 17:21:09.074625  454315 ubuntu.go:193] setting minikube options for container-runtime
	I0815 17:21:09.074900  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:21:09.075121  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:21:09.091676  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:21:09.091884  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0815 17:21:09.091906  454315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:21:09.409656  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:21:09.409686  454315 machine.go:96] duration metric: took 4.056868813s to provisionDockerMachine
	I0815 17:21:09.409700  454315 start.go:293] postStartSetup for "ha-896691" (driver="docker")
	I0815 17:21:09.409714  454315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:21:09.409773  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:21:09.409824  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:21:09.427898  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691/id_rsa Username:docker}
	I0815 17:21:09.520937  454315 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:21:09.523906  454315 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 17:21:09.523940  454315 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 17:21:09.523952  454315 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 17:21:09.523962  454315 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 17:21:09.523979  454315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/addons for local assets ...
	I0815 17:21:09.524041  454315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/files for local assets ...
	I0815 17:21:09.524142  454315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> 3840912.pem in /etc/ssl/certs
	I0815 17:21:09.524155  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> /etc/ssl/certs/3840912.pem
	I0815 17:21:09.524298  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:21:09.531904  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem --> /etc/ssl/certs/3840912.pem (1708 bytes)
	I0815 17:21:09.552021  454315 start.go:296] duration metric: took 142.305579ms for postStartSetup
	I0815 17:21:09.552111  454315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:21:09.552148  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:21:09.569541  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691/id_rsa Username:docker}
	I0815 17:21:09.661134  454315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 17:21:09.665144  454315 fix.go:56] duration metric: took 4.612778166s for fixHost
	I0815 17:21:09.665174  454315 start.go:83] releasing machines lock for "ha-896691", held for 4.612832332s
	I0815 17:21:09.665238  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691
	I0815 17:21:09.681474  454315 ssh_runner.go:195] Run: cat /version.json
	I0815 17:21:09.681518  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:21:09.681574  454315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:21:09.681641  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:21:09.698200  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691/id_rsa Username:docker}
	I0815 17:21:09.699103  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691/id_rsa Username:docker}
	I0815 17:21:09.857421  454315 ssh_runner.go:195] Run: systemctl --version
	I0815 17:21:09.861473  454315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:21:09.998661  454315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 17:21:10.002969  454315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:21:10.010962  454315 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 17:21:10.011027  454315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:21:10.018538  454315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 17:21:10.018558  454315 start.go:495] detecting cgroup driver to use...
	I0815 17:21:10.018598  454315 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 17:21:10.018632  454315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:21:10.029268  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:21:10.038670  454315 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:21:10.038720  454315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:21:10.049667  454315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:21:10.059375  454315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:21:10.133928  454315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:21:10.213932  454315 docker.go:233] disabling docker service ...
	I0815 17:21:10.214016  454315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:21:10.224884  454315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:21:10.234420  454315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:21:10.305929  454315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:21:10.377828  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:21:10.388046  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:21:10.402169  454315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:21:10.402239  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:10.410634  454315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:21:10.410704  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:10.419226  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:10.427525  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:10.436070  454315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:21:10.443933  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:10.452278  454315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:10.460287  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:10.468455  454315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:21:10.475599  454315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:21:10.482588  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:21:10.562176  454315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:21:10.662153  454315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:21:10.662222  454315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:21:10.665708  454315 start.go:563] Will wait 60s for crictl version
	I0815 17:21:10.665762  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:21:10.668858  454315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:21:10.702678  454315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 17:21:10.702770  454315 ssh_runner.go:195] Run: crio --version
	I0815 17:21:10.735452  454315 ssh_runner.go:195] Run: crio --version
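
The sed edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) all land in the same CRI-O drop-in. Reconstructed from those commands — this is a sketch of the expected result, not a capture of the real file — the relevant part of the drop-in now reads:

	$ sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# ... other keys left untouched ...
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
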
	I0815 17:21:10.770323  454315 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 17:21:10.771341  454315 cli_runner.go:164] Run: docker network inspect ha-896691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:21:10.787430  454315 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 17:21:10.790776  454315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
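
The one-liner above is an idempotent /etc/hosts update: filter out any stale line for the host name, append the fresh mapping, then copy the temp file back under sudo because /etc/hosts is root-owned. In general form (NAME and IP are placeholders):

	{ grep -v $'\tNAME$' /etc/hosts; echo "IP	NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
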
	I0815 17:21:10.800676  454315 kubeadm.go:883] updating cluster {Name:ha-896691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-896691 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:21:10.800856  454315 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:21:10.800922  454315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:21:10.840185  454315 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:21:10.840208  454315 crio.go:433] Images already preloaded, skipping extraction
	I0815 17:21:10.840259  454315 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:21:10.872026  454315 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:21:10.872049  454315 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:21:10.872060  454315 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0815 17:21:10.872166  454315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-896691 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-896691 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:21:10.872232  454315 ssh_runner.go:195] Run: crio config
	I0815 17:21:10.912675  454315 cni.go:84] Creating CNI manager for ""
	I0815 17:21:10.912705  454315 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 17:21:10.912716  454315 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:21:10.912738  454315 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-896691 NodeName:ha-896691 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:21:10.912873  454315 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-896691"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
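
This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new a few steps below and, since this is a restart, only diffed against the copy already on disk. Two sanity checks one could run by hand (kubeadm v1.31 ships a validate subcommand; treat the exact invocation as an assumption):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
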
	
	I0815 17:21:10.912893  454315 kube-vip.go:115] generating kube-vip config ...
	I0815 17:21:10.912927  454315 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0815 17:21:10.924417  454315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 17:21:10.924525  454315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
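
kube-vip runs as a static pod on each control-plane node; the instances elect a leader through the plndr-cp-lock Lease named in the config above, and the leader answers ARP for the VIP 192.168.49.254 and load-balances port 8443 across the API servers. Once the cluster is up, the current holder can be read back — a hypothetical spot check, not part of the test:

	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
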
	I0815 17:21:10.924593  454315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:21:10.932026  454315 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:21:10.932076  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 17:21:10.939201  454315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0815 17:21:10.954325  454315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:21:10.969180  454315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0815 17:21:10.984160  454315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 17:21:10.998972  454315 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:21:11.002013  454315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:21:11.011638  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:21:11.085975  454315 ssh_runner.go:195] Run: sudo systemctl start kubelet
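
The scp steps above installed the kubelet unit, the 10-kubeadm.conf drop-in carrying the ExecStart rendered earlier, the kubeadm config, and the kube-vip manifest. After the daemon-reload, what systemd actually loaded can be inspected (a hypothetical spot check):

	systemctl cat kubelet
	systemctl status kubelet --no-pager
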
	I0815 17:21:11.098391  454315 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691 for IP: 192.168.49.2
	I0815 17:21:11.098411  454315 certs.go:194] generating shared ca certs ...
	I0815 17:21:11.098429  454315 certs.go:226] acquiring lock for ca certs: {Name:mkf196aaefcb61003123eeb327e0f1a70bf4bfe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:21:11.098649  454315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key
	I0815 17:21:11.098695  454315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key
	I0815 17:21:11.098706  454315 certs.go:256] generating profile certs ...
	I0815 17:21:11.098815  454315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.key
	I0815 17:21:11.098847  454315 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key.13424654
	I0815 17:21:11.098866  454315 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt.13424654 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0815 17:21:11.189599  454315 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt.13424654 ...
	I0815 17:21:11.189627  454315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt.13424654: {Name:mk3e4c5c228b0c4f67a29c722cd321e843cec789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:21:11.189773  454315 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key.13424654 ...
	I0815 17:21:11.189786  454315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key.13424654: {Name:mkdf082dd2f457262ec7a7d54ec422f3059cb4a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:21:11.189859  454315 certs.go:381] copying /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt.13424654 -> /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt
	I0815 17:21:11.190009  454315 certs.go:385] copying /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key.13424654 -> /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key
	I0815 17:21:11.190141  454315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.key
	I0815 17:21:11.190158  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:21:11.190170  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:21:11.190184  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:21:11.190198  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:21:11.190210  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:21:11.190222  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:21:11.190234  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:21:11.190245  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:21:11.190294  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem (1338 bytes)
	W0815 17:21:11.190321  454315 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091_empty.pem, impossibly tiny 0 bytes
	I0815 17:21:11.190331  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 17:21:11.190351  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem (1078 bytes)
	I0815 17:21:11.190372  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:21:11.190392  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem (1675 bytes)
	I0815 17:21:11.190427  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem (1708 bytes)
	I0815 17:21:11.190451  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> /usr/share/ca-certificates/3840912.pem
	I0815 17:21:11.190465  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:21:11.190478  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem -> /usr/share/ca-certificates/384091.pem
	I0815 17:21:11.191081  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:21:11.212484  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:21:11.232735  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:21:11.254139  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 17:21:11.274704  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 17:21:11.296350  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 17:21:11.316568  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:21:11.336766  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:21:11.357179  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem --> /usr/share/ca-certificates/3840912.pem (1708 bytes)
	I0815 17:21:11.377605  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:21:11.397770  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem --> /usr/share/ca-certificates/384091.pem (1338 bytes)
	I0815 17:21:11.418003  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:21:11.433369  454315 ssh_runner.go:195] Run: openssl version
	I0815 17:21:11.438098  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3840912.pem && ln -fs /usr/share/ca-certificates/3840912.pem /etc/ssl/certs/3840912.pem"
	I0815 17:21:11.446172  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3840912.pem
	I0815 17:21:11.449151  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:14 /usr/share/ca-certificates/3840912.pem
	I0815 17:21:11.449198  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3840912.pem
	I0815 17:21:11.455304  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3840912.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:21:11.462967  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:21:11.470915  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:21:11.473941  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:21:11.473980  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:21:11.480136  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:21:11.487668  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384091.pem && ln -fs /usr/share/ca-certificates/384091.pem /etc/ssl/certs/384091.pem"
	I0815 17:21:11.495738  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384091.pem
	I0815 17:21:11.498768  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:14 /usr/share/ca-certificates/384091.pem
	I0815 17:21:11.498819  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384091.pem
	I0815 17:21:11.504771  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384091.pem /etc/ssl/certs/51391683.0"
	I0815 17:21:11.512190  454315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:21:11.515144  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 17:21:11.520959  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 17:21:11.526733  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 17:21:11.532438  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 17:21:11.538164  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 17:21:11.543865  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
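
The hash/symlink pairs above mirror what update-ca-certificates does: OpenSSL locates a CA through a symlink named <subject-hash>.0, which is why b5213941.0 points at minikubeCA.pem. The -checkend 86400 runs then exit non-zero if a certificate expires within the next 24 hours. By hand, the same checks look like this (illustrative paths):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid for 24h+"
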
	I0815 17:21:11.549436  454315 kubeadm.go:392] StartCluster: {Name:ha-896691 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-896691 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:21:11.549543  454315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 17:21:11.549578  454315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:21:11.581415  454315 cri.go:89] found id: ""
	I0815 17:21:11.581469  454315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 17:21:11.589365  454315 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 17:21:11.589382  454315 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 17:21:11.589422  454315 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 17:21:11.596646  454315 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:21:11.597047  454315 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-896691" does not appear in /home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:21:11.597149  454315 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-377193/kubeconfig needs updating (will repair): [kubeconfig missing "ha-896691" cluster setting kubeconfig missing "ha-896691" context setting]
	I0815 17:21:11.597422  454315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/kubeconfig: {Name:mk661ec10a39902a1883ea9ee46c4be1d73fd858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:21:11.597837  454315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:21:11.598043  454315 kapi.go:59] client config for ha-896691: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.key", CAFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 17:21:11.598473  454315 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 17:21:11.598758  454315 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 17:21:11.605940  454315 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0815 17:21:11.605956  454315 kubeadm.go:597] duration metric: took 16.569024ms to restartPrimaryControlPlane
	I0815 17:21:11.605962  454315 kubeadm.go:394] duration metric: took 56.533904ms to StartCluster
	I0815 17:21:11.605975  454315 settings.go:142] acquiring lock: {Name:mke1aec41bab7354aae03597d79755a9c481f6a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:21:11.606018  454315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:21:11.606485  454315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/kubeconfig: {Name:mk661ec10a39902a1883ea9ee46c4be1d73fd858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
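
The repair above found no "ha-896691" entry in the jenkins kubeconfig, so both the cluster and the context were re-added before the file was written back. A hypothetical spot check of the repaired file:

	KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig \
	  kubectl config get-contexts ha-896691
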
	I0815 17:21:11.606662  454315 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:21:11.606689  454315 start.go:241] waiting for startup goroutines ...
	I0815 17:21:11.606706  454315 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 17:21:11.606914  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:21:11.610247  454315 out.go:177] * Enabled addons: 
	I0815 17:21:11.611339  454315 addons.go:510] duration metric: took 4.636457ms for enable addons: enabled=[]
	I0815 17:21:11.611374  454315 start.go:246] waiting for cluster config update ...
	I0815 17:21:11.611385  454315 start.go:255] writing updated cluster config ...
	I0815 17:21:11.612822  454315 out.go:201] 
	I0815 17:21:11.614102  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:21:11.614182  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	I0815 17:21:11.615628  454315 out.go:177] * Starting "ha-896691-m02" control-plane node in "ha-896691" cluster
	I0815 17:21:11.616745  454315 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 17:21:11.617764  454315 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:21:11.618781  454315 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:21:11.618796  454315 cache.go:56] Caching tarball of preloaded images
	I0815 17:21:11.618843  454315 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:21:11.618877  454315 preload.go:172] Found /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:21:11.618891  454315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:21:11.618978  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	W0815 17:21:11.637213  454315 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 17:21:11.637233  454315 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:21:11.637321  454315 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:21:11.637340  454315 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:21:11.637349  454315 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:21:11.637360  454315 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:21:11.637370  454315 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 17:21:11.638470  454315 image.go:273] response: 
	I0815 17:21:11.682818  454315 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 17:21:11.682870  454315 cache.go:194] Successfully downloaded all kic artifacts
	I0815 17:21:11.682923  454315 start.go:360] acquireMachinesLock for ha-896691-m02: {Name:mkfee6e66902ea2d9dd6e1a736b01d9b1fb3a23d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:21:11.682997  454315 start.go:364] duration metric: took 52.727µs to acquireMachinesLock for "ha-896691-m02"
	I0815 17:21:11.683021  454315 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:21:11.683029  454315 fix.go:54] fixHost starting: m02
	I0815 17:21:11.683267  454315 cli_runner.go:164] Run: docker container inspect ha-896691-m02 --format={{.State.Status}}
	I0815 17:21:11.700344  454315 fix.go:112] recreateIfNeeded on ha-896691-m02: state=Stopped err=<nil>
	W0815 17:21:11.700373  454315 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:21:11.702261  454315 out.go:177] * Restarting existing docker container for "ha-896691-m02" ...
	I0815 17:21:11.703419  454315 cli_runner.go:164] Run: docker start ha-896691-m02
	I0815 17:21:11.952978  454315 cli_runner.go:164] Run: docker container inspect ha-896691-m02 --format={{.State.Status}}
	I0815 17:21:11.971278  454315 kic.go:430] container "ha-896691-m02" state is running.
	I0815 17:21:11.971637  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m02
	I0815 17:21:11.988954  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	I0815 17:21:11.989212  454315 machine.go:93] provisionDockerMachine start ...
	I0815 17:21:11.989286  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m02
	I0815 17:21:12.006638  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:21:12.006832  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0815 17:21:12.006843  454315 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:21:12.007506  454315 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43372->127.0.0.1:33183: read: connection reset by peer
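
The reset is expected: sshd inside the just-started container is not up yet, so libmachine keeps redialing the forwarded port until it answers (the retry succeeds about three seconds later, below). A shell-level equivalent of that wait — hypothetical, using the key path logged for this machine:

	until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no -p 33183 \
	    -i /home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m02/id_rsa \
	    docker@127.0.0.1 true; do sleep 1; done
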
	I0815 17:21:15.140028  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896691-m02
	
	I0815 17:21:15.140055  454315 ubuntu.go:169] provisioning hostname "ha-896691-m02"
	I0815 17:21:15.140138  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m02
	I0815 17:21:15.157772  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:21:15.157955  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0815 17:21:15.157968  454315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-896691-m02 && echo "ha-896691-m02" | sudo tee /etc/hostname
	I0815 17:21:15.303289  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896691-m02
	
	I0815 17:21:15.303375  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m02
	I0815 17:21:15.320044  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:21:15.320233  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0815 17:21:15.320257  454315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-896691-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-896691-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-896691-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:21:15.452463  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:21:15.452492  454315 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19450-377193/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-377193/.minikube}
	I0815 17:21:15.452513  454315 ubuntu.go:177] setting up certificates
	I0815 17:21:15.452524  454315 provision.go:84] configureAuth start
	I0815 17:21:15.452605  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m02
	I0815 17:21:15.468869  454315 provision.go:143] copyHostCerts
	I0815 17:21:15.468916  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem
	I0815 17:21:15.468949  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem, removing ...
	I0815 17:21:15.468959  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem
	I0815 17:21:15.469022  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem (1078 bytes)
	I0815 17:21:15.469101  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem
	I0815 17:21:15.469120  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem, removing ...
	I0815 17:21:15.469126  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem
	I0815 17:21:15.469153  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem (1123 bytes)
	I0815 17:21:15.469254  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem
	I0815 17:21:15.469291  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem, removing ...
	I0815 17:21:15.469298  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem
	I0815 17:21:15.469319  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem (1675 bytes)
	I0815 17:21:15.469376  454315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem org=jenkins.ha-896691-m02 san=[127.0.0.1 192.168.49.3 ha-896691-m02 localhost minikube]
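
minikube generates this server certificate in Go, signed by the machine CA with the org and SAN list shown above. Purely for illustration — not minikube's code path, and file names are placeholders — an equivalent openssl (bash) flow would be:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.ha-896691-m02" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(echo "subjectAltName=IP:127.0.0.1,IP:192.168.49.3,DNS:ha-896691-m02,DNS:localhost,DNS:minikube") \
	  -out server.pem
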
	I0815 17:21:15.578951  454315 provision.go:177] copyRemoteCerts
	I0815 17:21:15.579010  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:21:15.579048  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m02
	I0815 17:21:15.595877  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m02/id_rsa Username:docker}
	I0815 17:21:15.689101  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:21:15.689167  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 17:21:15.710883  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:21:15.710950  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:21:15.732009  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:21:15.732070  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:21:15.753220  454315 provision.go:87] duration metric: took 300.680902ms to configureAuth
	I0815 17:21:15.753252  454315 ubuntu.go:193] setting minikube options for container-runtime
	I0815 17:21:15.753502  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:21:15.753634  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m02
	I0815 17:21:15.770206  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:21:15.770419  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0815 17:21:15.770440  454315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:21:16.098453  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:21:16.098482  454315 machine.go:96] duration metric: took 4.109253635s to provisionDockerMachine
	I0815 17:21:16.098496  454315 start.go:293] postStartSetup for "ha-896691-m02" (driver="docker")
	I0815 17:21:16.098510  454315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:21:16.098586  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:21:16.098635  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m02
	I0815 17:21:16.115626  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m02/id_rsa Username:docker}
	I0815 17:21:16.209004  454315 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:21:16.211921  454315 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 17:21:16.211947  454315 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 17:21:16.211955  454315 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 17:21:16.211964  454315 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 17:21:16.211973  454315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/addons for local assets ...
	I0815 17:21:16.212019  454315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/files for local assets ...
	I0815 17:21:16.212088  454315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> 3840912.pem in /etc/ssl/certs
	I0815 17:21:16.212098  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> /etc/ssl/certs/3840912.pem
	I0815 17:21:16.212175  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:21:16.219753  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem --> /etc/ssl/certs/3840912.pem (1708 bytes)
	I0815 17:21:16.240494  454315 start.go:296] duration metric: took 141.980933ms for postStartSetup
	I0815 17:21:16.240600  454315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:21:16.240644  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m02
	I0815 17:21:16.257179  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m02/id_rsa Username:docker}
	I0815 17:21:16.349301  454315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 17:21:16.353359  454315 fix.go:56] duration metric: took 4.670326437s for fixHost
	I0815 17:21:16.353382  454315 start.go:83] releasing machines lock for "ha-896691-m02", held for 4.670373492s
	I0815 17:21:16.353435  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m02
	I0815 17:21:16.372203  454315 out.go:177] * Found network options:
	I0815 17:21:16.373608  454315 out.go:177]   - NO_PROXY=192.168.49.2
	W0815 17:21:16.374788  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:21:16.374816  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:21:16.374878  454315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:21:16.374912  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m02
	I0815 17:21:16.374973  454315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:21:16.375026  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m02
	I0815 17:21:16.391061  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m02/id_rsa Username:docker}
	I0815 17:21:16.391933  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m02/id_rsa Username:docker}
	I0815 17:21:16.628604  454315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 17:21:16.635293  454315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:21:16.664698  454315 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 17:21:16.664788  454315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:21:16.679415  454315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 17:21:16.679449  454315 start.go:495] detecting cgroup driver to use...
	I0815 17:21:16.679491  454315 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 17:21:16.679545  454315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:21:16.761793  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:21:16.777461  454315 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:21:16.777513  454315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:21:16.858822  454315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:21:16.873515  454315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:21:17.194229  454315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:21:17.460201  454315 docker.go:233] disabling docker service ...
	I0815 17:21:17.460271  454315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:21:17.472164  454315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:21:17.482144  454315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:21:17.758147  454315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:21:18.068170  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:21:18.083402  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:21:18.170254  454315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:21:18.170315  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:18.184596  454315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:21:18.184661  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:18.256868  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:18.269424  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:18.285955  454315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:21:18.301417  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:18.364440  454315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:21:18.377460  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
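Editor's note: taken together, the sed edits above leave the cri-o drop-in with the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and a default sysctl opening unprivileged ports. A reconstruction of the approximate end state of /etc/crio/crio.conf.d/02-crio.conf, written from the commands rather than copied off a node (TOML section placement is my assumption):

    package main

    import "os"

    // Approximate end state of the drop-in after the sed edits in the log.
    const crioDropIn = `[crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `

    func main() {
        if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0644); err != nil {
            panic(err) // needs root
        }
    }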
	I0815 17:21:18.391594  454315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:21:18.455547  454315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:21:18.467164  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:21:18.780962  454315 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:21:20.326151  454315 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.545153452s)
	I0815 17:21:20.326182  454315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:21:20.326223  454315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:21:20.329686  454315 start.go:563] Will wait 60s for crictl version
	I0815 17:21:20.329739  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:21:20.332788  454315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:21:20.379821  454315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
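Editor's note: after restarting crio, the runner waits up to 60s for the CRI socket to appear and for crictl to answer. A self-contained Go sketch of that wait-for-socket step (the poll interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // deadline passes - the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }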
	I0815 17:21:20.379909  454315 ssh_runner.go:195] Run: crio --version
	I0815 17:21:20.414396  454315 ssh_runner.go:195] Run: crio --version
	I0815 17:21:20.451351  454315 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 17:21:20.453026  454315 out.go:177]   - env NO_PROXY=192.168.49.2
	I0815 17:21:20.454726  454315 cli_runner.go:164] Run: docker network inspect ha-896691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:21:20.472022  454315 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 17:21:20.475600  454315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
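Editor's note: the bash one-liner above is an idempotent /etc/hosts update: strip any line already ending in <tab>host.minikube.internal, append a fresh mapping, and swap the file into place. The same dance in Go (hypothetical helper name; it writes via a temp file in the same directory so the final rename is atomic):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites hostsPath so exactly one line maps name to ip.
    func ensureHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale entry, like grep -v in the log
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        tmp := hostsPath + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }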
	I0815 17:21:20.485707  454315 mustload.go:65] Loading cluster: ha-896691
	I0815 17:21:20.485919  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:21:20.486154  454315 cli_runner.go:164] Run: docker container inspect ha-896691 --format={{.State.Status}}
	I0815 17:21:20.502781  454315 host.go:66] Checking if "ha-896691" exists ...
	I0815 17:21:20.503064  454315 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691 for IP: 192.168.49.3
	I0815 17:21:20.503078  454315 certs.go:194] generating shared ca certs ...
	I0815 17:21:20.503103  454315 certs.go:226] acquiring lock for ca certs: {Name:mkf196aaefcb61003123eeb327e0f1a70bf4bfe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:21:20.503238  454315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key
	I0815 17:21:20.503294  454315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key
	I0815 17:21:20.503307  454315 certs.go:256] generating profile certs ...
	I0815 17:21:20.503392  454315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.key
	I0815 17:21:20.503470  454315 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key.9e0d1a8b
	I0815 17:21:20.503534  454315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.key
	I0815 17:21:20.503550  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:21:20.503571  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:21:20.503590  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:21:20.503609  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:21:20.503635  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:21:20.503655  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:21:20.503672  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:21:20.503689  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:21:20.503759  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem (1338 bytes)
	W0815 17:21:20.503805  454315 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091_empty.pem, impossibly tiny 0 bytes
	I0815 17:21:20.503824  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 17:21:20.503861  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem (1078 bytes)
	I0815 17:21:20.503893  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:21:20.503925  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem (1675 bytes)
	I0815 17:21:20.503979  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem (1708 bytes)
	I0815 17:21:20.504020  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> /usr/share/ca-certificates/3840912.pem
	I0815 17:21:20.504040  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:21:20.504058  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem -> /usr/share/ca-certificates/384091.pem
	I0815 17:21:20.504127  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:21:20.520244  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691/id_rsa Username:docker}
	I0815 17:21:20.608861  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 17:21:20.612411  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 17:21:20.623950  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 17:21:20.626975  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 17:21:20.638311  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 17:21:20.641260  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 17:21:20.652256  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 17:21:20.655304  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 17:21:20.666462  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 17:21:20.670330  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 17:21:20.684100  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 17:21:20.687966  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
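Editor's note: the stat / "scp ... --> memory" pairs pull the cluster-wide secrets (sa.key, front-proxy CA, etcd CA) off the primary control plane so they can be replayed onto the joining node below. A minimal sketch of one such fetch over SSH using golang.org/x/crypto/ssh (the helper is illustrative; host, port, user, key path, and remote path are taken from the log):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // fetchRemoteFile reads a file off the primary node into memory.
    func fetchRemoteFile(addr, keyPath, remotePath string) ([]byte, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return nil, err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return nil, err
        }
        defer sess.Close()
        return sess.Output("sudo cat " + remotePath)
    }

    func main() {
        b, err := fetchRemoteFile("127.0.0.1:33178",
            "/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691/id_rsa",
            "/var/lib/minikube/certs/sa.key")
        if err != nil {
            panic(err)
        }
        fmt.Printf("fetched %d bytes\n", len(b))
    }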
	I0815 17:21:20.700228  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:21:20.725772  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:21:20.748915  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:21:20.770358  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 17:21:20.791450  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 17:21:20.812474  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 17:21:20.833606  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:21:20.854439  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:21:20.875134  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem --> /usr/share/ca-certificates/3840912.pem (1708 bytes)
	I0815 17:21:20.896065  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:21:20.916712  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem --> /usr/share/ca-certificates/384091.pem (1338 bytes)
	I0815 17:21:20.937093  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 17:21:20.952684  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 17:21:20.968192  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 17:21:20.983697  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 17:21:20.999335  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 17:21:21.014992  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0815 17:21:21.030451  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 17:21:21.046206  454315 ssh_runner.go:195] Run: openssl version
	I0815 17:21:21.051198  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3840912.pem && ln -fs /usr/share/ca-certificates/3840912.pem /etc/ssl/certs/3840912.pem"
	I0815 17:21:21.060163  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3840912.pem
	I0815 17:21:21.064218  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:14 /usr/share/ca-certificates/3840912.pem
	I0815 17:21:21.064272  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3840912.pem
	I0815 17:21:21.071057  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3840912.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:21:21.079414  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:21:21.088427  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:21:21.091660  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:21:21.091714  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:21:21.097998  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:21:21.106550  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384091.pem && ln -fs /usr/share/ca-certificates/384091.pem /etc/ssl/certs/384091.pem"
	I0815 17:21:21.115388  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384091.pem
	I0815 17:21:21.118743  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:14 /usr/share/ca-certificates/384091.pem
	I0815 17:21:21.118798  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384091.pem
	I0815 17:21:21.125196  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384091.pem /etc/ssl/certs/51391683.0"
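Editor's note: the openssl x509 -hash / ln -fs pairs build OpenSSL-style trust links. OpenSSL locates CA certificates in /etc/ssl/certs by a symlink named <subject-hash>.0, and the test -L guard keeps each run idempotent. Sketched in Go by shelling out to the same openssl invocation:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash for certPath and
    // symlinks /etc/ssl/certs/<hash>.0 to it so CA lookup can find the cert.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked; keep the step idempotent
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }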
	I0815 17:21:21.133643  454315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:21:21.137381  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 17:21:21.143547  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 17:21:21.150122  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 17:21:21.156695  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 17:21:21.163091  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 17:21:21.169354  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
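Editor's note: each -checkend 86400 call exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. The equivalent check in pure Go with crypto/x509 (the helper name is mine):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // d - the same predicate as `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }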
	I0815 17:21:21.175376  454315 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.0 crio true true} ...
	I0815 17:21:21.175490  454315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-896691-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-896691 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
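Editor's note: in the kubelet unit text above, the bare ExecStart= line is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service so the following ExecStart= replaces, rather than appends to, the command line. A sketch of writing such a drop-in (paths from the log; the flag list is abbreviated):

    package main

    import (
        "fmt"
        "os"
    )

    // writeKubeletDropIn writes a 10-kubeadm.conf style override. The empty
    // ExecStart= resets the base unit's command before the node-specific one.
    func writeKubeletDropIn(nodeName, nodeIP string) error {
        unit := fmt.Sprintf(`[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s
    `, nodeName, nodeIP)
        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
            return err
        }
        return os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0644)
    }

    func main() {
        if err := writeKubeletDropIn("ha-896691-m02", "192.168.49.3"); err != nil {
            panic(err)
        }
    }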
	I0815 17:21:21.175518  454315 kube-vip.go:115] generating kube-vip config ...
	I0815 17:21:21.175553  454315 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0815 17:21:21.188013  454315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
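Editor's note: kube-vip's control-plane load balancing rides on kernel IPVS, so the `lsmod | grep ip_vs` probe decides whether lb_enable goes into the config that follows. The same probe can read /proc/modules directly; a small sketch:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ipvsAvailable mirrors `lsmod | grep ip_vs` by scanning /proc/modules.
    func ipvsAvailable() bool {
        data, err := os.ReadFile("/proc/modules")
        if err != nil {
            return false
        }
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasPrefix(line, "ip_vs") {
                return true
            }
        }
        return false
    }

    func main() {
        if ipvsAvailable() {
            fmt.Println("auto-enabling control-plane load-balancing in kube-vip")
        }
    }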
	I0815 17:21:21.188093  454315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 17:21:21.188143  454315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:21:21.196028  454315 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:21:21.196099  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 17:21:21.204276  454315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 17:21:21.220594  454315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:21:21.236250  454315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 17:21:21.253820  454315 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:21:21.257176  454315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:21:21.267166  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:21:21.362027  454315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:21:21.372743  454315 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:21:21.372975  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:21:21.375276  454315 out.go:177] * Verifying Kubernetes components...
	I0815 17:21:21.376644  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:21:21.462491  454315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:21:21.473748  454315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:21:21.474018  454315 kapi.go:59] client config for ha-896691: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.key", CAFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 17:21:21.474118  454315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
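Editor's note: the kubeconfig points at the HA virtual IP (192.168.49.254), but while the cluster is still converging the VIP may not answer, so the client host is rewritten to the primary node's address before polling begins. Sketched against a client-go rest.Config (client-go is the assumed dependency; the helper is illustrative):

    package main

    import (
        "fmt"

        "k8s.io/client-go/rest"
    )

    // overrideHost swaps a stale API host (e.g. the HA VIP) for a concrete
    // control-plane endpoint before the first request.
    func overrideHost(cfg *rest.Config, fresh string) {
        if cfg.Host != fresh {
            fmt.Printf("overriding stale host %s with %s\n", cfg.Host, fresh)
            cfg.Host = fresh
        }
    }

    func main() {
        cfg := &rest.Config{Host: "https://192.168.49.254:8443"}
        overrideHost(cfg, "https://192.168.49.2:8443")
        fmt.Println("using", cfg.Host)
    }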
	I0815 17:21:21.474326  454315 node_ready.go:35] waiting up to 6m0s for node "ha-896691-m02" to be "Ready" ...
	I0815 17:21:21.474425  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:21:21.474433  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:21.474440  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:21.474444  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:32.982998  454315 round_trippers.go:574] Response Status: 500 Internal Server Error in 11508 milliseconds
	I0815 17:21:32.983346  454315 node_ready.go:53] error getting node "ha-896691-m02": etcdserver: request timed out
	I0815 17:21:32.983438  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:21:32.983450  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:32.983461  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:32.983469  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:37.825399  454315 round_trippers.go:574] Response Status: 200 OK in 4841 milliseconds
	I0815 17:21:37.826768  454315 node_ready.go:49] node "ha-896691-m02" has status "Ready":"True"
	I0815 17:21:37.826800  454315 node_ready.go:38] duration metric: took 16.352445208s for node "ha-896691-m02" to be "Ready" ...
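Editor's note: node_ready polls the node object until its Ready condition is True; note that the very first GET above rode out a 500 ("etcdserver: request timed out") while the new etcd member joined, and the poller simply retried. A minimal client-go version of that loop (polling cadence and error handling are simplified):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the node's Ready condition is True, treating
    // transient API errors (like the 500 above) as retryable.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %s not Ready within %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19450-377193/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(cs, "ha-896691-m02", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node Ready")
    }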
	I0815 17:21:37.826815  454315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:21:37.826881  454315 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 17:21:37.826892  454315 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 17:21:37.826952  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:21:37.826957  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:37.826963  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:37.826966  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:37.834937  454315 round_trippers.go:574] Response Status: 429 Too Many Requests in 7 milliseconds
	I0815 17:21:38.835914  454315 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:21:38.835976  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:21:38.835983  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:38.835994  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:38.836003  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:38.842868  454315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
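Editor's note: the 429 above comes from the freshly joined apiserver shedding load; client-go's retry layer honors the Retry-After header (the "Got a Retry-After 1s response" line) before reissuing the request. The generic pattern with net/http (TLS and auth are omitted, so this is a shape sketch only):

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // getWithRetryAfter retries a GET when the server answers 429, sleeping
    // for the Retry-After interval before each new attempt.
    func getWithRetryAfter(url string, attempts int) (*http.Response, error) {
        for i := 0; i < attempts; i++ {
            resp, err := http.Get(url)
            if err != nil {
                return nil, err
            }
            if resp.StatusCode != http.StatusTooManyRequests {
                return resp, nil
            }
            resp.Body.Close()
            delay := 1 * time.Second
            if s := resp.Header.Get("Retry-After"); s != "" {
                if secs, err := strconv.Atoi(s); err == nil {
                    delay = time.Duration(secs) * time.Second
                }
            }
            fmt.Printf("got 429, retrying after %v\n", delay)
            time.Sleep(delay)
        }
        return nil, fmt.Errorf("still throttled after %d attempts", attempts)
    }

    func main() {
        resp, err := getWithRetryAfter("http://127.0.0.1:8001/api/v1/namespaces/kube-system/pods", 3)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }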
	I0815 17:21:38.855147  454315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lmnsh" in "kube-system" namespace to be "Ready" ...
	I0815 17:21:38.855297  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-lmnsh
	I0815 17:21:38.855309  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:38.855321  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:38.855335  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:38.857265  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:38.857978  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:38.857993  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:38.858003  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:38.858014  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:38.859985  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:38.860500  454315 pod_ready.go:93] pod "coredns-6f6b679f8f-lmnsh" in "kube-system" namespace has status "Ready":"True"
	I0815 17:21:38.860517  454315 pod_ready.go:82] duration metric: took 5.342241ms for pod "coredns-6f6b679f8f-lmnsh" in "kube-system" namespace to be "Ready" ...
	I0815 17:21:38.860526  454315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-w6rw2" in "kube-system" namespace to be "Ready" ...
	I0815 17:21:38.860608  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-w6rw2
	I0815 17:21:38.860621  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:38.860628  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:38.860636  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:38.862420  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:38.863053  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:38.863072  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:38.863083  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:38.863090  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:38.864972  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:38.865496  454315 pod_ready.go:93] pod "coredns-6f6b679f8f-w6rw2" in "kube-system" namespace has status "Ready":"True"
	I0815 17:21:38.865516  454315 pod_ready.go:82] duration metric: took 4.98446ms for pod "coredns-6f6b679f8f-w6rw2" in "kube-system" namespace to be "Ready" ...
	I0815 17:21:38.865525  454315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:21:38.865572  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691
	I0815 17:21:38.865579  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:38.865585  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:38.865589  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:38.867282  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:38.867841  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:38.867856  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:38.867866  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:38.867871  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:38.869653  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:38.870092  454315 pod_ready.go:93] pod "etcd-ha-896691" in "kube-system" namespace has status "Ready":"True"
	I0815 17:21:38.870109  454315 pod_ready.go:82] duration metric: took 4.578541ms for pod "etcd-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:21:38.870116  454315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:21:38.870160  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m02
	I0815 17:21:38.870168  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:38.870174  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:38.870179  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:38.871937  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:38.872538  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:21:38.872576  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:38.872585  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:38.872593  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:38.874357  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:38.874851  454315 pod_ready.go:93] pod "etcd-ha-896691-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:21:38.874866  454315 pod_ready.go:82] duration metric: took 4.744799ms for pod "etcd-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:21:38.874875  454315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:21:38.874920  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:21:38.874928  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:38.874935  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:38.874940  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:38.876945  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:39.036820  454315 request.go:632] Waited for 159.313771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:21:39.036882  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:21:39.036887  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:39.036895  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:39.036902  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:39.038992  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:39.039494  454315 pod_ready.go:93] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:21:39.039515  454315 pod_ready.go:82] duration metric: took 164.633707ms for pod "etcd-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
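Editor's note: the "Waited ... due to client-side throttling" lines here and below are produced by client-go's own QPS/Burst rate limiter, not by the server: the burst of pod+node GETs exceeds the default client QPS, so requests get spaced out locally. The knobs live on rest.Config (the values below are arbitrary examples, not minikube's settings):

    package main

    import (
        "fmt"

        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{Host: "https://192.168.49.2:8443"}
        // Raising QPS/Burst loosens the client-side limiter behind the
        // "Waited for ... due to client-side throttling" messages.
        cfg.QPS = 50
        cfg.Burst = 100
        fmt.Printf("client rate limit: qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
    }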
	I0815 17:21:39.039533  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:21:39.236957  454315 request.go:632] Waited for 197.356969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:39.237029  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:39.237067  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:39.237078  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:39.237090  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:39.239234  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:39.436451  454315 request.go:632] Waited for 196.363162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:39.436515  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:39.436536  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:39.436572  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:39.436584  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:39.439097  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:39.635925  454315 request.go:632] Waited for 95.203194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:39.635995  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:39.636009  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:39.636021  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:39.636033  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:39.653694  454315 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0815 17:21:39.836136  454315 request.go:632] Waited for 181.424038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:39.836191  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:39.836196  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:39.836237  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:39.836244  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:39.856360  454315 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0815 17:21:40.039831  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:40.039855  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:40.039866  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:40.039874  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:40.042916  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:21:40.235915  454315 request.go:632] Waited for 192.283805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:40.235980  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:40.235985  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:40.235993  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:40.235999  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:40.254139  454315 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0815 17:21:40.539808  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:40.539834  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:40.539842  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:40.539846  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:40.542620  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:40.636582  454315 request.go:632] Waited for 93.241827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:40.636641  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:40.636646  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:40.636653  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:40.636656  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:40.639258  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:41.039919  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:41.039941  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:41.039950  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:41.039954  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:41.042670  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:41.043245  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:41.043260  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:41.043266  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:41.043270  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:41.045675  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:41.046079  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
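Editor's note: from here the log is pod_ready's steady-state loop: roughly every 500ms it re-fetches the kube-apiserver static pod and its node until the pod's Ready condition flips to True. The predicate being polled, in client-go terms (a sketch, not pod_ready.go itself):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19450-377193/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            ok, err := podReady(cs, "kube-system", "kube-apiserver-ha-896691")
            if err == nil && ok {
                fmt.Println("kube-apiserver Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence in the log
        }
    }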
	I0815 17:21:41.540478  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:41.540497  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:41.540506  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:41.540509  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:41.543007  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:41.543885  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:41.543904  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:41.543913  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:41.543919  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:41.546043  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:42.039971  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:42.039991  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:42.039999  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:42.040002  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:42.042359  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:42.043035  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:42.043050  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:42.043058  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:42.043065  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:42.045138  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:42.539780  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:42.539801  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:42.539809  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:42.539814  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:42.542184  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:42.542931  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:42.542948  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:42.542958  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:42.542964  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:42.545209  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:43.039899  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:43.039923  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:43.039934  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:43.039941  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:43.042393  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:43.043132  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:43.043147  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:43.043155  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:43.043158  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:43.045153  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:43.539993  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:43.540013  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:43.540020  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:43.540026  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:43.542620  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:43.543302  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:43.543318  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:43.543325  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:43.543329  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:43.545386  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:43.545750  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:21:44.039805  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:44.039831  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:44.039842  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:44.039847  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:44.042487  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:44.043312  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:44.043334  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:44.043344  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:44.043349  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:44.047364  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:21:44.540127  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:44.540153  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:44.540162  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:44.540169  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:44.542729  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:44.543387  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:44.543403  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:44.543410  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:44.543414  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:44.545434  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:45.040593  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:45.040614  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:45.040622  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:45.040663  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:45.043428  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:45.044082  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:45.044098  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:45.044107  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:45.044114  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:45.046217  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:45.540074  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:45.540096  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:45.540104  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:45.540109  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:45.542815  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:45.543476  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:45.543492  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:45.543499  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:45.543503  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:45.545567  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:45.546040  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:21:46.040376  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:46.040396  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:46.040404  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:46.040408  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:46.043125  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:46.043877  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:46.043894  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:46.043901  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:46.043905  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:46.046422  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:46.540166  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:46.540187  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:46.540195  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:46.540200  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:46.542795  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:46.543444  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:46.543459  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:46.543466  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:46.543471  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:46.545619  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:47.040506  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:47.040527  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:47.040533  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:47.040538  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:47.043242  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:47.043889  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:47.043904  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:47.043911  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:47.043917  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:47.046046  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:47.539829  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:47.539861  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:47.539870  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:47.539873  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:47.542646  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:47.543404  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:47.543418  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:47.543425  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:47.543430  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:47.545533  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:48.039885  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:48.039905  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:48.039913  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:48.039916  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:48.042986  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:21:48.043864  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:48.043889  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:48.043900  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:48.043906  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:48.045703  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:48.046129  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:21:48.540498  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:48.540518  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:48.540526  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:48.540531  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:48.543118  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:48.543790  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:48.543805  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:48.543812  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:48.543816  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:48.545774  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:49.040625  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:49.040651  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:49.040663  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:49.040669  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:49.043114  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:49.043769  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:49.043785  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:49.043792  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:49.043796  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:49.045915  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:49.540731  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:49.540755  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:49.540768  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:49.540773  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:49.543203  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:49.543791  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:49.543806  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:49.543815  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:49.543824  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:49.545939  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:50.039903  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:50.039927  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:50.039936  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:50.039940  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:50.042516  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:50.043192  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:50.043209  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:50.043215  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:50.043218  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:50.045255  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:50.540044  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:50.540077  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:50.540090  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:50.540095  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:50.542790  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:50.543397  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:50.543415  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:50.543425  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:50.543431  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:50.545709  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:50.546185  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:21:51.040159  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:51.040238  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:51.040260  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:51.040274  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:51.043331  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:21:51.044098  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:51.044116  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:51.044127  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:51.044134  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:51.045948  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:51.540707  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:51.540727  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:51.540735  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:51.540740  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:51.543238  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:51.543928  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:51.543942  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:51.543950  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:51.543955  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:51.545886  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:52.039807  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:52.039826  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:52.039833  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:52.039838  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:52.042457  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:52.043184  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:52.043201  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:52.043208  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:52.043212  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:52.045361  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:52.540158  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:52.540183  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:52.540194  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:52.540199  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:52.542738  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:52.543483  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:52.543499  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:52.543507  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:52.543511  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:52.545561  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:53.039918  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:53.039940  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:53.039955  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:53.039962  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:53.044319  454315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:21:53.044978  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:53.044994  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:53.045003  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:53.045010  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:53.046834  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:53.047274  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:21:53.540879  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:53.540910  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:53.540921  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:53.540925  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:53.543687  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:53.544487  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:53.544507  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:53.544518  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:53.544526  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:53.546571  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:54.040413  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:54.040432  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:54.040441  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:54.040446  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:54.042779  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:54.043416  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:54.043431  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:54.043438  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:54.043444  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:54.045374  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:54.540165  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:54.540192  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:54.540203  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:54.540210  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:54.542691  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:54.543336  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:54.543349  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:54.543357  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:54.543362  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:54.545401  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:55.040348  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:55.040370  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:55.040379  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:55.040383  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:55.043137  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:55.043741  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:55.043756  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:55.043764  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:55.043767  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:55.046022  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:55.539798  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:55.539830  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:55.539842  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:55.539846  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:55.542452  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:55.543069  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:55.543086  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:55.543093  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:55.543097  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:55.545188  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:55.545703  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:21:56.039918  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:56.039936  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:56.039944  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:56.039948  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:56.042474  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:56.043155  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:56.043170  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:56.043177  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:56.043183  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:56.045281  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:56.540114  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:56.540136  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:56.540145  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:56.540148  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:56.543116  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:56.543777  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:56.543792  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:56.543800  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:56.543803  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:56.545891  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:57.039714  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:57.039735  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:57.039744  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:57.039748  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:57.042575  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:57.043282  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:57.043298  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:57.043308  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:57.043313  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:57.045359  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:57.540115  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:57.540136  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:57.540144  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:57.540148  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:57.542851  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:57.543484  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:57.543502  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:57.543509  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:57.543514  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:57.545518  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:57.545940  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:21:58.040383  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:58.040405  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:58.040413  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:58.040416  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:58.043026  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:58.043798  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:58.043819  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:58.043829  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:58.043833  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:58.045855  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:58.540666  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:58.540693  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:58.540701  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:58.540705  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:58.543368  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:58.544138  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:58.544152  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:58.544159  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:58.544164  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:58.546398  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:59.040201  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:59.040221  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:59.040229  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:59.040234  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:59.042881  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:59.043524  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:59.043539  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:59.043547  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:59.043552  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:59.045546  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:59.540432  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:21:59.540453  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:59.540461  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:59.540465  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:59.542962  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:21:59.543668  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:21:59.543684  454315 round_trippers.go:469] Request Headers:
	I0815 17:21:59.543691  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:21:59.543694  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:21:59.545623  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:21:59.546068  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:00.040354  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:00.040374  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:00.040383  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:00.040388  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:00.043101  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:00.043745  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:00.043760  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:00.043768  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:00.043772  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:00.045817  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:00.540677  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:00.540699  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:00.540706  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:00.540710  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:00.543286  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:00.543921  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:00.543937  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:00.543945  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:00.543949  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:00.546019  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:01.039800  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:01.039820  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:01.039828  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:01.039833  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:01.042509  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:01.043209  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:01.043226  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:01.043233  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:01.043236  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:01.045331  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:01.539862  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:01.539884  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:01.539893  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:01.539899  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:01.542535  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:01.543302  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:01.543320  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:01.543332  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:01.543342  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:01.545501  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:02.040470  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:02.040490  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:02.040498  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:02.040504  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:02.043152  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:02.043957  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:02.043978  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:02.043988  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:02.043996  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:02.046175  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:02.046716  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:02.540001  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:02.540022  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:02.540031  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:02.540035  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:02.542734  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:02.543393  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:02.543409  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:02.543416  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:02.543420  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:02.545327  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:03.040092  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:03.040113  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:03.040122  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:03.040128  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:03.042828  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:03.043467  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:03.043483  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:03.043496  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:03.043500  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:03.045642  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:03.540503  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:03.540524  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:03.540531  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:03.540535  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:03.543194  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:03.543884  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:03.543906  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:03.543914  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:03.543918  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:03.545894  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:04.040734  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:04.040756  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:04.040764  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:04.040769  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:04.043111  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:04.043802  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:04.043818  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:04.043825  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:04.043829  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:04.045824  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:04.540673  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:04.540693  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:04.540701  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:04.540705  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:04.543196  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:04.543889  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:04.543905  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:04.543913  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:04.543918  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:04.546021  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:04.546480  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:05.040139  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:05.040159  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:05.040167  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:05.040171  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:05.042606  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:05.043329  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:05.043348  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:05.043358  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:05.043364  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:05.045352  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:05.540058  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:05.540077  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:05.540085  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:05.540089  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:05.542686  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:05.543326  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:05.543342  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:05.543350  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:05.543354  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:05.545479  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:06.040170  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:06.040190  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:06.040199  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:06.040202  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:06.042719  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:06.043368  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:06.043385  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:06.043390  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:06.043394  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:06.045402  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:06.540226  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:06.540251  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:06.540263  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:06.540270  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:06.542931  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:06.543626  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:06.543643  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:06.543651  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:06.543655  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:06.545771  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:07.040712  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:07.040733  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:07.040742  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:07.040746  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:07.043268  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:07.044095  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:07.044113  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:07.044122  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:07.044129  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:07.046257  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:07.046719  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:07.540035  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:07.540056  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:07.540065  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:07.540069  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:07.542762  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:07.544160  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:07.544186  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:07.544198  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:07.544205  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:07.546431  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:08.040210  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:08.040232  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:08.040240  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:08.040245  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:08.042971  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:08.043631  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:08.043647  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:08.043654  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:08.043658  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:08.045806  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:08.540648  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:08.540670  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:08.540678  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:08.540683  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:08.543341  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:08.543996  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:08.544012  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:08.544019  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:08.544023  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:08.546129  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:09.039840  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:09.039862  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:09.039871  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:09.039876  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:09.042568  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:09.043453  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:09.043473  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:09.043483  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:09.043488  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:09.045507  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:09.540435  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:09.540456  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:09.540465  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:09.540470  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:09.543047  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:09.543671  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:09.543687  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:09.543695  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:09.543699  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:09.545959  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:09.546399  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:10.039823  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:10.039843  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:10.039851  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:10.039855  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:10.042554  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:10.043181  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:10.043196  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:10.043203  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:10.043210  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:10.045474  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:10.540377  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:10.540397  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:10.540405  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:10.540410  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:10.542983  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:10.543695  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:10.543719  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:10.543727  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:10.543730  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:10.547761  454315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:22:11.040590  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:11.040612  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:11.040620  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:11.040624  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:11.043130  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:11.044032  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:11.044050  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:11.044060  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:11.044064  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:11.046113  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:11.540289  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:11.540309  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:11.540317  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:11.540320  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:11.542733  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:11.543401  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:11.543417  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:11.543424  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:11.543429  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:11.545474  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:12.040693  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:12.040717  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:12.040728  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:12.040735  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:12.043294  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:12.043898  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:12.043914  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:12.043922  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:12.043925  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:12.046006  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:12.046458  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:12.539813  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:12.539832  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:12.539841  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:12.539845  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:12.542262  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:12.542909  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:12.542925  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:12.542933  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:12.542938  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:12.544948  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:13.039725  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:13.039748  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:13.039759  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:13.039765  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:13.042402  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:13.043079  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:13.043097  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:13.043104  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:13.043107  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:13.045069  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:13.539820  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:13.539841  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:13.539849  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:13.539855  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:13.542405  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:13.543203  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:13.543220  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:13.543231  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:13.543236  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:13.545503  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:14.040348  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:14.040370  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:14.040381  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:14.040387  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:14.042921  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:14.043535  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:14.043550  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:14.043557  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:14.043561  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:14.045630  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:14.540474  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:14.540493  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:14.540501  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:14.540505  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:14.542828  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:14.543500  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:14.543514  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:14.543522  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:14.543525  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:14.545502  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:14.545945  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:15.040696  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:15.040720  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:15.040731  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:15.040735  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:15.043553  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:15.044199  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:15.044213  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:15.044221  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:15.044226  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:15.046429  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:15.540196  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:15.540215  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:15.540225  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:15.540229  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:15.542571  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:15.543166  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:15.543183  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:15.543191  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:15.543196  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:15.545081  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:16.039828  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:16.039847  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:16.039855  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:16.039859  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:16.042328  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:16.042993  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:16.043008  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:16.043015  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:16.043020  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:16.045035  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:16.539810  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:16.539828  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:16.539836  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:16.539840  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:16.542035  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:16.542620  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:16.542635  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:16.542642  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:16.542650  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:16.544430  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:17.040175  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:17.040198  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:17.040204  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:17.040215  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:17.042881  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:17.043523  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:17.043538  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:17.043546  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:17.043551  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:17.045478  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:17.045978  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:17.540367  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:17.540392  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:17.540406  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:17.540410  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:17.542788  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:17.543438  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:17.543455  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:17.543462  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:17.543465  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:17.545301  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:18.040057  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:18.040082  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:18.040093  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:18.040100  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:18.042816  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:18.043494  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:18.043510  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:18.043518  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:18.043524  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:18.047455  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:22:18.540302  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:18.540329  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:18.540339  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:18.540346  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:18.543089  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:18.543808  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:18.543824  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:18.543832  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:18.543835  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:18.546072  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:19.039845  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:19.039868  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:19.039877  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:19.039882  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:19.042738  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:19.043469  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:19.043485  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:19.043495  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:19.043502  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:19.045674  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:19.046088  454315 pod_ready.go:103] pod "kube-apiserver-ha-896691" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:19.540470  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:19.540490  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:19.540497  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:19.540501  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:19.542775  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:19.543428  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:19.543451  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:19.543458  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:19.543461  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:19.545402  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:20.040260  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:20.040281  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.040292  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.040298  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.042829  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:20.043469  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:20.043485  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.043492  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.043499  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.045516  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:20.540578  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:22:20.540600  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.540608  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.540612  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.540918  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.541013  454315 pod_ready.go:98] error getting pod "kube-apiserver-ha-896691" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.541037  454315 pod_ready.go:82] duration metric: took 41.501495846s for pod "kube-apiserver-ha-896691" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.541056  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-ha-896691" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691": dial tcp 192.168.49.2:8443: connect: connection refused
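
The block above is minikube's pod_ready wait loop: every ~500ms it re-fetches the pod (and its node) until the pod's Ready condition turns True, and it gives up on a pod the moment the apiserver stops answering, as the "connection refused ... (skipping!)" lines show. A minimal client-go sketch of the same condition check follows; the helper names are hypothetical, not minikube's own code:

    package diag

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // the check behind the `"Ready":"False"` lines in the log above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitPodReady polls the apiserver on a fixed cadence until the pod
    // is Ready, an error occurs, or the timeout elapses.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                // Bail out on transport errors, as the log does on connection refused.
                return fmt.Errorf("getting pod %s/%s: %w", ns, name, err)
            }
            if isPodReady(pod) {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }
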
	I0815 17:22:20.541067  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.541134  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m02
	I0815 17:22:20.541144  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.541153  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.541161  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.541389  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.541469  454315 pod_ready.go:98] error getting pod "kube-apiserver-ha-896691-m02" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.541487  454315 pod_ready.go:82] duration metric: took 407.394µs for pod "kube-apiserver-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.541500  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-ha-896691-m02" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.541511  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.541581  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m03
	I0815 17:22:20.541589  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.541599  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.541607  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.541766  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.541803  454315 pod_ready.go:98] error getting pod "kube-apiserver-ha-896691-m03" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m03": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.541817  454315 pod_ready.go:82] duration metric: took 297.857µs for pod "kube-apiserver-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.541828  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-ha-896691-m03" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m03": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.541834  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.541878  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691
	I0815 17:22:20.541884  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.541890  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.541896  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.542034  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.542103  454315 pod_ready.go:98] error getting pod "kube-controller-manager-ha-896691" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.542118  454315 pod_ready.go:82] duration metric: took 275.183µs for pod "kube-controller-manager-ha-896691" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.542132  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-896691" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.542140  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.542206  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m02
	I0815 17:22:20.542217  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.542224  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.542230  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.542397  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.542437  454315 pod_ready.go:98] error getting pod "kube-controller-manager-ha-896691-m02" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.542451  454315 pod_ready.go:82] duration metric: took 301.444µs for pod "kube-controller-manager-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.542464  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-896691-m02" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.542472  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.542516  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m03
	I0815 17:22:20.542525  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.542535  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.542543  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.542700  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.542739  454315 pod_ready.go:98] error getting pod "kube-controller-manager-ha-896691-m03" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m03": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.542749  454315 pod_ready.go:82] duration metric: took 266.219µs for pod "kube-controller-manager-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.542758  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-896691-m03" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m03": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.542764  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-74b2m" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.542812  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-74b2m
	I0815 17:22:20.542819  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.542825  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.542828  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.542980  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.543035  454315 pod_ready.go:98] error getting pod "kube-proxy-74b2m" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-74b2m": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.543052  454315 pod_ready.go:82] duration metric: took 278.883µs for pod "kube-proxy-74b2m" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.543067  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-proxy-74b2m" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-74b2m": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.543080  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9m9tc" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.543138  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9m9tc
	I0815 17:22:20.543148  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.543158  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.543169  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.543368  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.543421  454315 pod_ready.go:98] error getting pod "kube-proxy-9m9tc" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9m9tc": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.543438  454315 pod_ready.go:82] duration metric: took 347.011µs for pod "kube-proxy-9m9tc" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.543451  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-proxy-9m9tc" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9m9tc": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.543463  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g4qhb" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.543516  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb
	I0815 17:22:20.543527  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.543536  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.543545  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.543759  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.543801  454315 pod_ready.go:98] error getting pod "kube-proxy-g4qhb" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.543821  454315 pod_ready.go:82] duration metric: took 344.705µs for pod "kube-proxy-g4qhb" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.543830  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-proxy-g4qhb" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.543836  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z4mvj" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.543876  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z4mvj
	I0815 17:22:20.543883  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.543888  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.543893  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.544030  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.544080  454315 pod_ready.go:98] error getting pod "kube-proxy-z4mvj" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z4mvj": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.544095  454315 pod_ready.go:82] duration metric: took 248.768µs for pod "kube-proxy-z4mvj" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.544108  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-proxy-z4mvj" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z4mvj": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.544120  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.741527  454315 request.go:632] Waited for 197.337334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691
	I0815 17:22:20.741587  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691
	I0815 17:22:20.741594  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.741604  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.741610  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.741909  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.741999  454315 pod_ready.go:98] error getting pod "kube-scheduler-ha-896691" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.742020  454315 pod_ready.go:82] duration metric: took 197.888115ms for pod "kube-scheduler-ha-896691" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.742042  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-896691" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.742057  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:20.941478  454315 request.go:632] Waited for 199.336186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m02
	I0815 17:22:20.941533  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m02
	I0815 17:22:20.941538  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:20.941545  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:20.941552  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:20.941821  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:20.941892  454315 pod_ready.go:98] error getting pod "kube-scheduler-ha-896691-m02" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.941907  454315 pod_ready.go:82] duration metric: took 199.839481ms for pod "kube-scheduler-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:20.941922  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-896691-m02" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:20.941931  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:21.141210  454315 request.go:632] Waited for 199.202742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m03
	I0815 17:22:21.141271  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m03
	I0815 17:22:21.141276  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:21.141284  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:21.141298  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:21.141626  454315 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0815 17:22:21.141698  454315 pod_ready.go:98] error getting pod "kube-scheduler-ha-896691-m03" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m03": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:21.141716  454315 pod_ready.go:82] duration metric: took 199.772059ms for pod "kube-scheduler-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	E0815 17:22:21.141731  454315 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-896691-m03" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m03": dial tcp 192.168.49.2:8443: connect: connection refused
	I0815 17:22:21.141742  454315 pod_ready.go:39] duration metric: took 43.314912721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
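
The "Waited ... due to client-side throttling" lines toward the end of that loop are client-go's own rate limiter kicking in (the message itself notes it is not server-side priority and fairness): the client-go defaults of 5 requests/s with a burst of 10 are easy to exhaust when dozens of pods are polled back to back. A hedged sketch of where those knobs live; the numbers are illustrative, not what minikube configures:

    package diag

    import (
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
    )

    // configWithHigherLimits loads a kubeconfig and raises the client-side
    // rate limits that produce the "Waited ... due to client-side
    // throttling" messages seen above.
    func configWithHigherLimits(kubeconfig string) (*rest.Config, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go default is 5
        cfg.Burst = 100 // client-go default is 10
        return cfg, nil
    }
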
	I0815 17:22:21.141761  454315 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:22:21.141813  454315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:22:21.152594  454315 api_server.go:72] duration metric: took 59.779785596s to wait for apiserver process to appear ...
	I0815 17:22:21.152621  454315 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:22:21.152639  454315 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 17:22:21.152969  454315 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
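
With the apiserver process restarting, the first /healthz probe above fails with connection refused, and minikube keeps re-checking until the endpoint answers 200 (which it does about four seconds later, below). A stand-alone sketch of such a probe, with certificate verification skipped purely to keep the sketch short (minikube authenticates with proper client certs):

    package diag

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls an apiserver /healthz URL until it returns
    // 200 OK or the timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Assumption: cert checks skipped only for brevity here.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy within %v", url, timeout)
    }

Against this cluster the call would look like waitHealthz("https://192.168.49.2:8443/healthz", time.Minute).
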
	I0815 17:22:21.652768  454315 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:22:21.652857  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:22:21.687864  454315 cri.go:89] found id: "be964e8ce3d999438106338414e7dc4ae8dca6b53c490c9d1c8303ffeffe0b0e"
	I0815 17:22:21.687885  454315 cri.go:89] found id: "47ae959b2e7c1facff1ac79ebef1e724a755f4b73379ad829b1709afe04694f4"
	I0815 17:22:21.687889  454315 cri.go:89] found id: ""
	I0815 17:22:21.687897  454315 logs.go:276] 2 containers: [be964e8ce3d999438106338414e7dc4ae8dca6b53c490c9d1c8303ffeffe0b0e 47ae959b2e7c1facff1ac79ebef1e724a755f4b73379ad829b1709afe04694f4]
	I0815 17:22:21.687940  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:21.691390  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:21.694654  454315 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 17:22:21.694719  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:22:21.729142  454315 cri.go:89] found id: "829f2733fb289f48baf40f9c4fb5b0035383064f89df2e9022e91d641dfbd1c9"
	I0815 17:22:21.729164  454315 cri.go:89] found id: "60e325f91056711a4513bef2c23702d9ddc7dfce8ccd9867256b9475e1c4ba53"
	I0815 17:22:21.729168  454315 cri.go:89] found id: ""
	I0815 17:22:21.729178  454315 logs.go:276] 2 containers: [829f2733fb289f48baf40f9c4fb5b0035383064f89df2e9022e91d641dfbd1c9 60e325f91056711a4513bef2c23702d9ddc7dfce8ccd9867256b9475e1c4ba53]
	I0815 17:22:21.729229  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:21.732606  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:21.735581  454315 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 17:22:21.735637  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:22:21.768624  454315 cri.go:89] found id: ""
	I0815 17:22:21.768654  454315 logs.go:276] 0 containers: []
	W0815 17:22:21.768667  454315 logs.go:278] No container was found matching "coredns"
	I0815 17:22:21.768677  454315 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:22:21.768726  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:22:21.803076  454315 cri.go:89] found id: "abb82ec13d568ed5a831df402d591035cdf0cd3b32fa2950e5f7bc6a84349200"
	I0815 17:22:21.803098  454315 cri.go:89] found id: "e8981a415b72c23c61bdd29c13d49ea2995b03aa4a269423d4ead8429b5af60b"
	I0815 17:22:21.803101  454315 cri.go:89] found id: ""
	I0815 17:22:21.803108  454315 logs.go:276] 2 containers: [abb82ec13d568ed5a831df402d591035cdf0cd3b32fa2950e5f7bc6a84349200 e8981a415b72c23c61bdd29c13d49ea2995b03aa4a269423d4ead8429b5af60b]
	I0815 17:22:21.803153  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:21.806741  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:21.809776  454315 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:22:21.809842  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:22:21.842269  454315 cri.go:89] found id: "9204d53785c7b7f5e97f61f1c81a8f63fd48a25e2b7d7da49ada6fefaf9cddee"
	I0815 17:22:21.842300  454315 cri.go:89] found id: ""
	I0815 17:22:21.842308  454315 logs.go:276] 1 containers: [9204d53785c7b7f5e97f61f1c81a8f63fd48a25e2b7d7da49ada6fefaf9cddee]
	I0815 17:22:21.842355  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:21.845642  454315 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:22:21.845701  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:22:21.883709  454315 cri.go:89] found id: "dad8cf1fe916b5a81431472e3da03e6227182661b3f6f6346275ea0763d731fe"
	I0815 17:22:21.883731  454315 cri.go:89] found id: "bd602fbe8737f2c5f07fdc2cc9e56d63cb4d355deacb09efb04f205dc99c4007"
	I0815 17:22:21.883737  454315 cri.go:89] found id: ""
	I0815 17:22:21.883746  454315 logs.go:276] 2 containers: [dad8cf1fe916b5a81431472e3da03e6227182661b3f6f6346275ea0763d731fe bd602fbe8737f2c5f07fdc2cc9e56d63cb4d355deacb09efb04f205dc99c4007]
	I0815 17:22:21.883811  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:21.887459  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:21.890887  454315 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 17:22:21.890955  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:22:21.928906  454315 cri.go:89] found id: "fc56d2dd5cf8ec051279c9c74eabbe0e58d2696b8155224ca0d8af8148db88a6"
	I0815 17:22:21.928930  454315 cri.go:89] found id: ""
	I0815 17:22:21.928941  454315 logs.go:276] 1 containers: [fc56d2dd5cf8ec051279c9c74eabbe0e58d2696b8155224ca0d8af8148db88a6]
	I0815 17:22:21.929000  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:21.932499  454315 logs.go:123] Gathering logs for dmesg ...
	I0815 17:22:21.932524  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:22:21.961164  454315 logs.go:123] Gathering logs for etcd [829f2733fb289f48baf40f9c4fb5b0035383064f89df2e9022e91d641dfbd1c9] ...
	I0815 17:22:21.961195  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 829f2733fb289f48baf40f9c4fb5b0035383064f89df2e9022e91d641dfbd1c9"
	I0815 17:22:22.012666  454315 logs.go:123] Gathering logs for kube-scheduler [e8981a415b72c23c61bdd29c13d49ea2995b03aa4a269423d4ead8429b5af60b] ...
	I0815 17:22:22.012704  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8981a415b72c23c61bdd29c13d49ea2995b03aa4a269423d4ead8429b5af60b"
	I0815 17:22:22.052106  454315 logs.go:123] Gathering logs for kube-controller-manager [dad8cf1fe916b5a81431472e3da03e6227182661b3f6f6346275ea0763d731fe] ...
	I0815 17:22:22.052137  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dad8cf1fe916b5a81431472e3da03e6227182661b3f6f6346275ea0763d731fe"
	I0815 17:22:22.106644  454315 logs.go:123] Gathering logs for kubelet ...
	I0815 17:22:22.106681  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:22:22.176395  454315 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:22:22.176441  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:22:22.432010  454315 logs.go:123] Gathering logs for kube-apiserver [be964e8ce3d999438106338414e7dc4ae8dca6b53c490c9d1c8303ffeffe0b0e] ...
	I0815 17:22:22.432043  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be964e8ce3d999438106338414e7dc4ae8dca6b53c490c9d1c8303ffeffe0b0e"
	I0815 17:22:22.483497  454315 logs.go:123] Gathering logs for kindnet [fc56d2dd5cf8ec051279c9c74eabbe0e58d2696b8155224ca0d8af8148db88a6] ...
	I0815 17:22:22.483531  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc56d2dd5cf8ec051279c9c74eabbe0e58d2696b8155224ca0d8af8148db88a6"
	I0815 17:22:22.521185  454315 logs.go:123] Gathering logs for CRI-O ...
	I0815 17:22:22.521219  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 17:22:22.597921  454315 logs.go:123] Gathering logs for kube-apiserver [47ae959b2e7c1facff1ac79ebef1e724a755f4b73379ad829b1709afe04694f4] ...
	I0815 17:22:22.597965  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47ae959b2e7c1facff1ac79ebef1e724a755f4b73379ad829b1709afe04694f4"
	I0815 17:22:22.642241  454315 logs.go:123] Gathering logs for etcd [60e325f91056711a4513bef2c23702d9ddc7dfce8ccd9867256b9475e1c4ba53] ...
	I0815 17:22:22.642267  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e325f91056711a4513bef2c23702d9ddc7dfce8ccd9867256b9475e1c4ba53"
	I0815 17:22:22.696492  454315 logs.go:123] Gathering logs for kube-scheduler [abb82ec13d568ed5a831df402d591035cdf0cd3b32fa2950e5f7bc6a84349200] ...
	I0815 17:22:22.696542  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb82ec13d568ed5a831df402d591035cdf0cd3b32fa2950e5f7bc6a84349200"
	I0815 17:22:22.739496  454315 logs.go:123] Gathering logs for kube-proxy [9204d53785c7b7f5e97f61f1c81a8f63fd48a25e2b7d7da49ada6fefaf9cddee] ...
	I0815 17:22:22.739526  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9204d53785c7b7f5e97f61f1c81a8f63fd48a25e2b7d7da49ada6fefaf9cddee"
	I0815 17:22:22.788359  454315 logs.go:123] Gathering logs for kube-controller-manager [bd602fbe8737f2c5f07fdc2cc9e56d63cb4d355deacb09efb04f205dc99c4007] ...
	I0815 17:22:22.788394  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd602fbe8737f2c5f07fdc2cc9e56d63cb4d355deacb09efb04f205dc99c4007"
	I0815 17:22:22.820907  454315 logs.go:123] Gathering logs for container status ...
	I0815 17:22:22.820942  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
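
While healthz is down, minikube collects diagnostics by running crictl and journalctl inside the node over SSH. A local approximation with os/exec; the crictl flags and the 400-line tail are copied from the commands logged above, and the SSH transport is omitted:

    package diag

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gatherContainerLogs lists all CRI containers whose name matches the
    // filter and dumps the last 400 lines of each, mirroring the
    // `crictl ps -a --quiet --name=...` / `crictl logs --tail 400` pair above.
    func gatherContainerLogs(name string) error {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return fmt.Errorf("listing %s containers: %w", name, err)
        }
        for _, id := range strings.Fields(string(out)) {
            logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                return fmt.Errorf("logs for %s: %w", id, err)
            }
            fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
        }
        return nil
    }
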
	I0815 17:22:25.359419  454315 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 17:22:25.364789  454315 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 17:22:25.364881  454315 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0815 17:22:25.364891  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:25.364902  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:25.364910  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:25.371073  454315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 17:22:25.371174  454315 api_server.go:141] control plane version: v1.31.0
	I0815 17:22:25.371191  454315 api_server.go:131] duration metric: took 4.218563515s to wait for apiserver health ...
	I0815 17:22:25.371199  454315 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:22:25.371223  454315 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:22:25.371268  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:22:25.406778  454315 cri.go:89] found id: "be964e8ce3d999438106338414e7dc4ae8dca6b53c490c9d1c8303ffeffe0b0e"
	I0815 17:22:25.406799  454315 cri.go:89] found id: "47ae959b2e7c1facff1ac79ebef1e724a755f4b73379ad829b1709afe04694f4"
	I0815 17:22:25.406804  454315 cri.go:89] found id: ""
	I0815 17:22:25.406812  454315 logs.go:276] 2 containers: [be964e8ce3d999438106338414e7dc4ae8dca6b53c490c9d1c8303ffeffe0b0e 47ae959b2e7c1facff1ac79ebef1e724a755f4b73379ad829b1709afe04694f4]
	I0815 17:22:25.406856  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:25.410318  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:25.413511  454315 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 17:22:25.413576  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:22:25.444388  454315 cri.go:89] found id: "829f2733fb289f48baf40f9c4fb5b0035383064f89df2e9022e91d641dfbd1c9"
	I0815 17:22:25.444415  454315 cri.go:89] found id: "60e325f91056711a4513bef2c23702d9ddc7dfce8ccd9867256b9475e1c4ba53"
	I0815 17:22:25.444420  454315 cri.go:89] found id: ""
	I0815 17:22:25.444428  454315 logs.go:276] 2 containers: [829f2733fb289f48baf40f9c4fb5b0035383064f89df2e9022e91d641dfbd1c9 60e325f91056711a4513bef2c23702d9ddc7dfce8ccd9867256b9475e1c4ba53]
	I0815 17:22:25.444481  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:25.447773  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:25.450690  454315 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 17:22:25.450746  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:22:25.482036  454315 cri.go:89] found id: ""
	I0815 17:22:25.482059  454315 logs.go:276] 0 containers: []
	W0815 17:22:25.482067  454315 logs.go:278] No container was found matching "coredns"
	I0815 17:22:25.482075  454315 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:22:25.482135  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:22:25.514580  454315 cri.go:89] found id: "abb82ec13d568ed5a831df402d591035cdf0cd3b32fa2950e5f7bc6a84349200"
	I0815 17:22:25.514603  454315 cri.go:89] found id: "e8981a415b72c23c61bdd29c13d49ea2995b03aa4a269423d4ead8429b5af60b"
	I0815 17:22:25.514607  454315 cri.go:89] found id: ""
	I0815 17:22:25.514615  454315 logs.go:276] 2 containers: [abb82ec13d568ed5a831df402d591035cdf0cd3b32fa2950e5f7bc6a84349200 e8981a415b72c23c61bdd29c13d49ea2995b03aa4a269423d4ead8429b5af60b]
	I0815 17:22:25.514676  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:25.518009  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:25.521004  454315 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:22:25.521050  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:22:25.551968  454315 cri.go:89] found id: "9204d53785c7b7f5e97f61f1c81a8f63fd48a25e2b7d7da49ada6fefaf9cddee"
	I0815 17:22:25.551993  454315 cri.go:89] found id: ""
	I0815 17:22:25.552009  454315 logs.go:276] 1 containers: [9204d53785c7b7f5e97f61f1c81a8f63fd48a25e2b7d7da49ada6fefaf9cddee]
	I0815 17:22:25.552075  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:25.555516  454315 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:22:25.555577  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:22:25.587618  454315 cri.go:89] found id: "dad8cf1fe916b5a81431472e3da03e6227182661b3f6f6346275ea0763d731fe"
	I0815 17:22:25.587641  454315 cri.go:89] found id: "bd602fbe8737f2c5f07fdc2cc9e56d63cb4d355deacb09efb04f205dc99c4007"
	I0815 17:22:25.587647  454315 cri.go:89] found id: ""
	I0815 17:22:25.587657  454315 logs.go:276] 2 containers: [dad8cf1fe916b5a81431472e3da03e6227182661b3f6f6346275ea0763d731fe bd602fbe8737f2c5f07fdc2cc9e56d63cb4d355deacb09efb04f205dc99c4007]
	I0815 17:22:25.587710  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:25.591477  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:25.594636  454315 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 17:22:25.594687  454315 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:22:25.627214  454315 cri.go:89] found id: "fc56d2dd5cf8ec051279c9c74eabbe0e58d2696b8155224ca0d8af8148db88a6"
	I0815 17:22:25.627235  454315 cri.go:89] found id: ""
	I0815 17:22:25.627243  454315 logs.go:276] 1 containers: [fc56d2dd5cf8ec051279c9c74eabbe0e58d2696b8155224ca0d8af8148db88a6]
	I0815 17:22:25.627286  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:25.630647  454315 logs.go:123] Gathering logs for dmesg ...
	I0815 17:22:25.630677  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:22:25.656292  454315 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:22:25.656320  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:22:25.832539  454315 logs.go:123] Gathering logs for kubelet ...
	I0815 17:22:25.832586  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 17:22:25.887822  454315 logs.go:123] Gathering logs for kube-scheduler [abb82ec13d568ed5a831df402d591035cdf0cd3b32fa2950e5f7bc6a84349200] ...
	I0815 17:22:25.887855  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb82ec13d568ed5a831df402d591035cdf0cd3b32fa2950e5f7bc6a84349200"
	I0815 17:22:25.922363  454315 logs.go:123] Gathering logs for kube-scheduler [e8981a415b72c23c61bdd29c13d49ea2995b03aa4a269423d4ead8429b5af60b] ...
	I0815 17:22:25.922393  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8981a415b72c23c61bdd29c13d49ea2995b03aa4a269423d4ead8429b5af60b"
	I0815 17:22:25.953628  454315 logs.go:123] Gathering logs for kube-proxy [9204d53785c7b7f5e97f61f1c81a8f63fd48a25e2b7d7da49ada6fefaf9cddee] ...
	I0815 17:22:25.953655  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9204d53785c7b7f5e97f61f1c81a8f63fd48a25e2b7d7da49ada6fefaf9cddee"
	I0815 17:22:25.984889  454315 logs.go:123] Gathering logs for kube-controller-manager [dad8cf1fe916b5a81431472e3da03e6227182661b3f6f6346275ea0763d731fe] ...
	I0815 17:22:25.984919  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dad8cf1fe916b5a81431472e3da03e6227182661b3f6f6346275ea0763d731fe"
	I0815 17:22:26.032367  454315 logs.go:123] Gathering logs for container status ...
	I0815 17:22:26.032397  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:22:26.069172  454315 logs.go:123] Gathering logs for kube-apiserver [be964e8ce3d999438106338414e7dc4ae8dca6b53c490c9d1c8303ffeffe0b0e] ...
	I0815 17:22:26.069203  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be964e8ce3d999438106338414e7dc4ae8dca6b53c490c9d1c8303ffeffe0b0e"
	I0815 17:22:26.110290  454315 logs.go:123] Gathering logs for kube-apiserver [47ae959b2e7c1facff1ac79ebef1e724a755f4b73379ad829b1709afe04694f4] ...
	I0815 17:22:26.110324  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47ae959b2e7c1facff1ac79ebef1e724a755f4b73379ad829b1709afe04694f4"
	I0815 17:22:26.146043  454315 logs.go:123] Gathering logs for etcd [829f2733fb289f48baf40f9c4fb5b0035383064f89df2e9022e91d641dfbd1c9] ...
	I0815 17:22:26.146071  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 829f2733fb289f48baf40f9c4fb5b0035383064f89df2e9022e91d641dfbd1c9"
	I0815 17:22:26.187962  454315 logs.go:123] Gathering logs for etcd [60e325f91056711a4513bef2c23702d9ddc7dfce8ccd9867256b9475e1c4ba53] ...
	I0815 17:22:26.187991  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e325f91056711a4513bef2c23702d9ddc7dfce8ccd9867256b9475e1c4ba53"
	I0815 17:22:26.231523  454315 logs.go:123] Gathering logs for kube-controller-manager [bd602fbe8737f2c5f07fdc2cc9e56d63cb4d355deacb09efb04f205dc99c4007] ...
	I0815 17:22:26.231558  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd602fbe8737f2c5f07fdc2cc9e56d63cb4d355deacb09efb04f205dc99c4007"
	I0815 17:22:26.263388  454315 logs.go:123] Gathering logs for kindnet [fc56d2dd5cf8ec051279c9c74eabbe0e58d2696b8155224ca0d8af8148db88a6] ...
	I0815 17:22:26.263417  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc56d2dd5cf8ec051279c9c74eabbe0e58d2696b8155224ca0d8af8148db88a6"
	I0815 17:22:26.300305  454315 logs.go:123] Gathering logs for CRI-O ...
	I0815 17:22:26.300334  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 17:22:28.856718  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:22:28.856738  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:28.856746  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:28.856749  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:28.863445  454315 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 17:22:28.871257  454315 system_pods.go:59] 26 kube-system pods found
	I0815 17:22:28.871309  454315 system_pods.go:61] "coredns-6f6b679f8f-lmnsh" [74ccd084-33a7-4529-919d-604b8750c354] Running
	I0815 17:22:28.871318  454315 system_pods.go:61] "coredns-6f6b679f8f-w6rw2" [3515df76-e41e-4c78-834f-5fbe2abc873d] Running
	I0815 17:22:28.871324  454315 system_pods.go:61] "etcd-ha-896691" [0a2ffa41-f65c-40fc-a35b-ea9f9db365ac] Running
	I0815 17:22:28.871337  454315 system_pods.go:61] "etcd-ha-896691-m02" [d028af14-3f5c-41f9-ac91-99ae705cf2b2] Running
	I0815 17:22:28.871343  454315 system_pods.go:61] "etcd-ha-896691-m03" [1101327d-2ac1-4210-906f-efc89ed60e64] Running
	I0815 17:22:28.871348  454315 system_pods.go:61] "kindnet-2bc4h" [5e118a8e-e9e4-45ee-94f7-654076df98d1] Running
	I0815 17:22:28.871354  454315 system_pods.go:61] "kindnet-8k6qn" [b4c2a221-3152-4594-8bf7-4f05626ac380] Running
	I0815 17:22:28.871359  454315 system_pods.go:61] "kindnet-9jffh" [6c800d06-5569-49ad-ae6f-3eb183c8ee5f] Running
	I0815 17:22:28.871367  454315 system_pods.go:61] "kindnet-qklml" [2c8d9dcc-4049-4948-b6ec-013a444bd983] Running
	I0815 17:22:28.871377  454315 system_pods.go:61] "kube-apiserver-ha-896691" [711da542-b0c9-44e7-86dc-ee202e3c8fd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 17:22:28.871394  454315 system_pods.go:61] "kube-apiserver-ha-896691-m02" [78aa1912-3696-4d32-beea-8ed41785c6fb] Running
	I0815 17:22:28.871402  454315 system_pods.go:61] "kube-apiserver-ha-896691-m03" [647e2c49-d59d-43ff-8149-f5d81d3ed071] Running
	I0815 17:22:28.871415  454315 system_pods.go:61] "kube-controller-manager-ha-896691" [6a9e2824-37af-4cdc-a6f6-897fd37b056e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 17:22:28.871421  454315 system_pods.go:61] "kube-controller-manager-ha-896691-m02" [af402e09-da87-4ce5-b722-e61c6e5df43b] Running
	I0815 17:22:28.871428  454315 system_pods.go:61] "kube-controller-manager-ha-896691-m03" [1b38cdb8-3607-4123-b4af-cb34e1899830] Running
	I0815 17:22:28.871433  454315 system_pods.go:61] "kube-proxy-74b2m" [c81582d5-063e-4bfa-a419-ef5d7c3422a1] Running
	I0815 17:22:28.871438  454315 system_pods.go:61] "kube-proxy-9m9tc" [6faed64d-d52e-4f36-8162-009d01da4ac8] Running
	I0815 17:22:28.871446  454315 system_pods.go:61] "kube-proxy-g4qhb" [125294c7-3523-4388-8a2d-5a199e1f2eef] Running
	I0815 17:22:28.871451  454315 system_pods.go:61] "kube-proxy-z4mvj" [7729789c-2a47-4633-831f-85fa51ebbc72] Running
	I0815 17:22:28.871457  454315 system_pods.go:61] "kube-scheduler-ha-896691" [64562846-8ad9-459d-af36-905c9c55c3c8] Running
	I0815 17:22:28.871461  454315 system_pods.go:61] "kube-scheduler-ha-896691-m02" [343b49bc-647a-42c0-a4dc-613e97613743] Running
	I0815 17:22:28.871466  454315 system_pods.go:61] "kube-scheduler-ha-896691-m03" [38e1f896-e7d7-47c2-a152-296284fab72e] Running
	I0815 17:22:28.871471  454315 system_pods.go:61] "kube-vip-ha-896691" [03e7e34d-56f7-40fb-b24c-864f9a08cdc7] Running
	I0815 17:22:28.871480  454315 system_pods.go:61] "kube-vip-ha-896691-m02" [8ac744c4-10fc-433d-900e-6d0cfb4f3ca4] Running
	I0815 17:22:28.871488  454315 system_pods.go:61] "kube-vip-ha-896691-m03" [2e3154ce-0dd7-426c-9409-fe00c0796ecc] Running
	I0815 17:22:28.871493  454315 system_pods.go:61] "storage-provisioner" [c53d929f-4e2b-4255-8189-e4d13aa590e4] Running
	I0815 17:22:28.871501  454315 system_pods.go:74] duration metric: took 3.500292558s to wait for pod list to return data ...
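
The 26-pod inventory above comes from a single GET on /api/v1/namespaces/kube-system/pods; note that the kube-apiserver and kube-controller-manager pods on ha-896691 are Running but not yet Ready after the restart. Reproducing that view with client-go is one List call (a sketch with a hypothetical helper name):

    package diag

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods prints each kube-system pod with its phase, the
    // same data summarized by the system_pods.go lines above.
    func listSystemPods(cs kubernetes.Interface) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
        }
        return nil
    }
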
	I0815 17:22:28.871510  454315 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:22:28.871766  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:22:28.871795  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:28.871806  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:28.871816  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:28.874546  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:28.874733  454315 default_sa.go:45] found service account: "default"
	I0815 17:22:28.874750  454315 default_sa.go:55] duration metric: took 3.234607ms for default service account to be created ...
	I0815 17:22:28.874757  454315 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:22:28.874804  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:22:28.874811  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:28.874818  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:28.874826  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:28.878841  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:22:28.885803  454315 system_pods.go:86] 26 kube-system pods found
	I0815 17:22:28.885833  454315 system_pods.go:89] "coredns-6f6b679f8f-lmnsh" [74ccd084-33a7-4529-919d-604b8750c354] Running
	I0815 17:22:28.885842  454315 system_pods.go:89] "coredns-6f6b679f8f-w6rw2" [3515df76-e41e-4c78-834f-5fbe2abc873d] Running
	I0815 17:22:28.885848  454315 system_pods.go:89] "etcd-ha-896691" [0a2ffa41-f65c-40fc-a35b-ea9f9db365ac] Running
	I0815 17:22:28.885855  454315 system_pods.go:89] "etcd-ha-896691-m02" [d028af14-3f5c-41f9-ac91-99ae705cf2b2] Running
	I0815 17:22:28.885860  454315 system_pods.go:89] "etcd-ha-896691-m03" [1101327d-2ac1-4210-906f-efc89ed60e64] Running
	I0815 17:22:28.885864  454315 system_pods.go:89] "kindnet-2bc4h" [5e118a8e-e9e4-45ee-94f7-654076df98d1] Running
	I0815 17:22:28.885868  454315 system_pods.go:89] "kindnet-8k6qn" [b4c2a221-3152-4594-8bf7-4f05626ac380] Running
	I0815 17:22:28.885871  454315 system_pods.go:89] "kindnet-9jffh" [6c800d06-5569-49ad-ae6f-3eb183c8ee5f] Running
	I0815 17:22:28.885880  454315 system_pods.go:89] "kindnet-qklml" [2c8d9dcc-4049-4948-b6ec-013a444bd983] Running
	I0815 17:22:28.885895  454315 system_pods.go:89] "kube-apiserver-ha-896691" [711da542-b0c9-44e7-86dc-ee202e3c8fd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 17:22:28.885905  454315 system_pods.go:89] "kube-apiserver-ha-896691-m02" [78aa1912-3696-4d32-beea-8ed41785c6fb] Running
	I0815 17:22:28.885915  454315 system_pods.go:89] "kube-apiserver-ha-896691-m03" [647e2c49-d59d-43ff-8149-f5d81d3ed071] Running
	I0815 17:22:28.885926  454315 system_pods.go:89] "kube-controller-manager-ha-896691" [6a9e2824-37af-4cdc-a6f6-897fd37b056e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 17:22:28.885933  454315 system_pods.go:89] "kube-controller-manager-ha-896691-m02" [af402e09-da87-4ce5-b722-e61c6e5df43b] Running
	I0815 17:22:28.885937  454315 system_pods.go:89] "kube-controller-manager-ha-896691-m03" [1b38cdb8-3607-4123-b4af-cb34e1899830] Running
	I0815 17:22:28.885942  454315 system_pods.go:89] "kube-proxy-74b2m" [c81582d5-063e-4bfa-a419-ef5d7c3422a1] Running
	I0815 17:22:28.885945  454315 system_pods.go:89] "kube-proxy-9m9tc" [6faed64d-d52e-4f36-8162-009d01da4ac8] Running
	I0815 17:22:28.885952  454315 system_pods.go:89] "kube-proxy-g4qhb" [125294c7-3523-4388-8a2d-5a199e1f2eef] Running
	I0815 17:22:28.885956  454315 system_pods.go:89] "kube-proxy-z4mvj" [7729789c-2a47-4633-831f-85fa51ebbc72] Running
	I0815 17:22:28.885959  454315 system_pods.go:89] "kube-scheduler-ha-896691" [64562846-8ad9-459d-af36-905c9c55c3c8] Running
	I0815 17:22:28.885965  454315 system_pods.go:89] "kube-scheduler-ha-896691-m02" [343b49bc-647a-42c0-a4dc-613e97613743] Running
	I0815 17:22:28.885969  454315 system_pods.go:89] "kube-scheduler-ha-896691-m03" [38e1f896-e7d7-47c2-a152-296284fab72e] Running
	I0815 17:22:28.885973  454315 system_pods.go:89] "kube-vip-ha-896691" [03e7e34d-56f7-40fb-b24c-864f9a08cdc7] Running
	I0815 17:22:28.885976  454315 system_pods.go:89] "kube-vip-ha-896691-m02" [8ac744c4-10fc-433d-900e-6d0cfb4f3ca4] Running
	I0815 17:22:28.885981  454315 system_pods.go:89] "kube-vip-ha-896691-m03" [2e3154ce-0dd7-426c-9409-fe00c0796ecc] Running
	I0815 17:22:28.885984  454315 system_pods.go:89] "storage-provisioner" [c53d929f-4e2b-4255-8189-e4d13aa590e4] Running
	I0815 17:22:28.885992  454315 system_pods.go:126] duration metric: took 11.22883ms to wait for k8s-apps to be running ...
	I0815 17:22:28.886000  454315 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:22:28.886046  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:22:28.898769  454315 system_svc.go:56] duration metric: took 12.757869ms WaitForService to wait for kubelet
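
The kubelet liveness check above is just systemd's exit status: `systemctl is-active --quiet` exits 0 only while the unit is active. A simplified equivalent of the logged probe:

    package diag

    import "os/exec"

    // kubeletRunning mirrors the `systemctl is-active --quiet` probe:
    // Run returns nil only when the command exits 0, i.e. the unit is active.
    func kubeletRunning() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
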
	I0815 17:22:28.898811  454315 kubeadm.go:582] duration metric: took 1m7.526021138s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:22:28.898838  454315 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:22:28.898919  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0815 17:22:28.898931  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:28.898942  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:28.898948  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:28.901554  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:28.903068  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:22:28.903092  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:22:28.903106  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:22:28.903109  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:22:28.903113  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:22:28.903116  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:22:28.903119  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:22:28.903121  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:22:28.903125  454315 node_conditions.go:105] duration metric: took 4.282075ms to run NodePressure ...
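Note: the four capacity pairs above come from a single GET /api/v1/nodes across the cluster's nodes. A minimal client-go sketch of the same capacity read follows; it is illustrative only, not minikube's actual code, and the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; minikube writes its own under the test home.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Same fields the log prints: per-node ephemeral storage and CPU capacity.
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
        }
    }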
	I0815 17:22:28.903135  454315 start.go:241] waiting for startup goroutines ...
	I0815 17:22:28.903155  454315 start.go:255] writing updated cluster config ...
	I0815 17:22:28.905169  454315 out.go:201] 
	I0815 17:22:28.906535  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:22:28.906624  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	I0815 17:22:28.908146  454315 out.go:177] * Starting "ha-896691-m03" control-plane node in "ha-896691" cluster
	I0815 17:22:28.909424  454315 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 17:22:28.910394  454315 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:22:28.911438  454315 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:22:28.911454  454315 cache.go:56] Caching tarball of preloaded images
	I0815 17:22:28.911463  454315 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:22:28.911534  454315 preload.go:172] Found /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:22:28.911546  454315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:22:28.911641  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	W0815 17:22:28.931087  454315 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 17:22:28.931105  454315 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:22:28.931202  454315 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:22:28.931220  454315 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:22:28.931224  454315 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:22:28.931232  454315 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:22:28.931237  454315 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 17:22:28.932260  454315 image.go:273] response: 
	I0815 17:22:28.977621  454315 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 17:22:28.977664  454315 cache.go:194] Successfully downloaded all kic artifacts
	I0815 17:22:28.977700  454315 start.go:360] acquireMachinesLock for ha-896691-m03: {Name:mk8cb376beb93095ab8f2fe9a0671035ad003d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:22:28.977767  454315 start.go:364] duration metric: took 48.063µs to acquireMachinesLock for "ha-896691-m03"
	I0815 17:22:28.977786  454315 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:22:28.977793  454315 fix.go:54] fixHost starting: m03
	I0815 17:22:28.978015  454315 cli_runner.go:164] Run: docker container inspect ha-896691-m03 --format={{.State.Status}}
	I0815 17:22:28.993762  454315 fix.go:112] recreateIfNeeded on ha-896691-m03: state=Stopped err=<nil>
	W0815 17:22:28.993794  454315 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:22:28.995355  454315 out.go:177] * Restarting existing docker container for "ha-896691-m03" ...
	I0815 17:22:28.996511  454315 cli_runner.go:164] Run: docker start ha-896691-m03
	I0815 17:22:29.259039  454315 cli_runner.go:164] Run: docker container inspect ha-896691-m03 --format={{.State.Status}}
	I0815 17:22:29.278369  454315 kic.go:430] container "ha-896691-m03" state is running.
	I0815 17:22:29.278757  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m03
	I0815 17:22:29.296691  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	I0815 17:22:29.297004  454315 machine.go:93] provisionDockerMachine start ...
	I0815 17:22:29.297064  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m03
	I0815 17:22:29.314859  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:22:29.315143  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0815 17:22:29.315161  454315 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:22:29.316197  454315 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55436->127.0.0.1:33188: read: connection reset by peer
	I0815 17:22:32.452067  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896691-m03
	
	I0815 17:22:32.452100  454315 ubuntu.go:169] provisioning hostname "ha-896691-m03"
	I0815 17:22:32.452171  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m03
	I0815 17:22:32.474175  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:22:32.474409  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0815 17:22:32.474430  454315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-896691-m03 && echo "ha-896691-m03" | sudo tee /etc/hostname
	I0815 17:22:32.619358  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896691-m03
	
	I0815 17:22:32.619444  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m03
	I0815 17:22:32.638093  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:22:32.638292  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0815 17:22:32.638316  454315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-896691-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-896691-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-896691-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:22:32.957582  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
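Note: the guarded script above only rewrites the 127.0.1.1 entry when the new hostname is missing, so reprovisioning stays idempotent. A self-contained sketch of driving such a command over SSH with golang.org/x/crypto/ssh; the user, port, and hostname are taken from the surrounding log, the key path is a placeholder, and the program itself is not minikube's:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/machines/ha-896691-m03/id_rsa") // placeholder key path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway local test node
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33188", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        // Same idempotent step the log runs: set the hostname and persist it.
        out, err := sess.CombinedOutput(`sudo hostname ha-896691-m03 && echo "ha-896691-m03" | sudo tee /etc/hostname`)
        fmt.Println(string(out), err)
    }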
	I0815 17:22:32.957624  454315 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19450-377193/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-377193/.minikube}
	I0815 17:22:32.957646  454315 ubuntu.go:177] setting up certificates
	I0815 17:22:32.957658  454315 provision.go:84] configureAuth start
	I0815 17:22:32.957728  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m03
	I0815 17:22:32.978635  454315 provision.go:143] copyHostCerts
	I0815 17:22:32.978683  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem
	I0815 17:22:32.978721  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem, removing ...
	I0815 17:22:32.978734  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem
	I0815 17:22:32.978814  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem (1078 bytes)
	I0815 17:22:32.978915  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem
	I0815 17:22:32.978943  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem, removing ...
	I0815 17:22:32.978958  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem
	I0815 17:22:32.979001  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem (1123 bytes)
	I0815 17:22:32.979069  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem
	I0815 17:22:32.979099  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem, removing ...
	I0815 17:22:32.979108  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem
	I0815 17:22:32.979143  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem (1675 bytes)
	I0815 17:22:32.979214  454315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem org=jenkins.ha-896691-m03 san=[127.0.0.1 192.168.49.4 ha-896691-m03 localhost minikube]
	I0815 17:22:33.298090  454315 provision.go:177] copyRemoteCerts
	I0815 17:22:33.298150  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:22:33.298187  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m03
	I0815 17:22:33.316216  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m03/id_rsa Username:docker}
	I0815 17:22:33.413587  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:22:33.413670  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 17:22:33.435473  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:22:33.435544  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:22:33.457285  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:22:33.457361  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:22:33.479216  454315 provision.go:87] duration metric: took 521.53788ms to configureAuth
	I0815 17:22:33.479246  454315 ubuntu.go:193] setting minikube options for container-runtime
	I0815 17:22:33.479472  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:22:33.479588  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m03
	I0815 17:22:33.496511  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:22:33.496719  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0815 17:22:33.496739  454315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:22:33.874881  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:22:33.874912  454315 machine.go:96] duration metric: took 4.577891437s to provisionDockerMachine
	I0815 17:22:33.874927  454315 start.go:293] postStartSetup for "ha-896691-m03" (driver="docker")
	I0815 17:22:33.874939  454315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:22:33.874990  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:22:33.875029  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m03
	I0815 17:22:33.892198  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m03/id_rsa Username:docker}
	I0815 17:22:34.066433  454315 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:22:34.071979  454315 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 17:22:34.072025  454315 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 17:22:34.072039  454315 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 17:22:34.072048  454315 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 17:22:34.072061  454315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/addons for local assets ...
	I0815 17:22:34.072121  454315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/files for local assets ...
	I0815 17:22:34.072215  454315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> 3840912.pem in /etc/ssl/certs
	I0815 17:22:34.072225  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> /etc/ssl/certs/3840912.pem
	I0815 17:22:34.072346  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:22:34.084377  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem --> /etc/ssl/certs/3840912.pem (1708 bytes)
	I0815 17:22:34.189545  454315 start.go:296] duration metric: took 314.599811ms for postStartSetup
	I0815 17:22:34.189651  454315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:22:34.189731  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m03
	I0815 17:22:34.207829  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m03/id_rsa Username:docker}
	I0815 17:22:34.461514  454315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 17:22:34.471194  454315 fix.go:56] duration metric: took 5.493388908s for fixHost
	I0815 17:22:34.471225  454315 start.go:83] releasing machines lock for "ha-896691-m03", held for 5.493444567s
	I0815 17:22:34.471321  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m03
	I0815 17:22:34.502307  454315 out.go:177] * Found network options:
	I0815 17:22:34.503646  454315 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0815 17:22:34.504877  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:22:34.504897  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:22:34.504918  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:22:34.504928  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:22:34.504999  454315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:22:34.505039  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m03
	I0815 17:22:34.505064  454315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:22:34.505119  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m03
	I0815 17:22:34.521574  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m03/id_rsa Username:docker}
	I0815 17:22:34.522004  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m03/id_rsa Username:docker}
	I0815 17:22:34.858201  454315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 17:22:34.892195  454315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:22:34.955966  454315 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 17:22:34.956071  454315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:22:34.966202  454315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
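Note: the find/mv above disables any loopback CNI config by renaming it with a .mk_disabled suffix so it cannot conflict with the runtime's own CNI setup. A small sketch of the equivalent rename pass in Go, illustrative only:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Mirror of: find /etc/cni/net.d -name '*loopback.conf*' -not -name '*.mk_disabled' -exec mv {} {}.mk_disabled
        matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
        if err != nil {
            panic(err)
        }
        for _, m := range matches {
            if strings.HasSuffix(m, ".mk_disabled") {
                continue // already disabled on a previous run
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                panic(err)
            }
            fmt.Println("disabled", m)
        }
    }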
	I0815 17:22:34.966229  454315 start.go:495] detecting cgroup driver to use...
	I0815 17:22:34.966267  454315 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 17:22:34.966314  454315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:22:34.977676  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:22:34.988350  454315 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:22:34.988409  454315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:22:35.056586  454315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:22:35.067901  454315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:22:35.374398  454315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:22:35.659720  454315 docker.go:233] disabling docker service ...
	I0815 17:22:35.659802  454315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:22:35.674248  454315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:22:35.686355  454315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:22:35.898646  454315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:22:36.083518  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:22:36.095806  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:22:36.113499  454315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:22:36.113558  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:22:36.123587  454315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:22:36.123649  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:22:36.160998  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:22:36.172072  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:22:36.182108  454315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:22:36.190706  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:22:36.199972  454315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:22:36.208445  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:22:36.218193  454315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:22:36.226610  454315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:22:36.234301  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:22:36.391348  454315 ssh_runner.go:195] Run: sudo systemctl restart crio
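Note: the sed runs above point CRI-O at the registry.k8s.io/pause:3.10 pause image and the "cgroupfs" cgroup manager before the restart. A sketch of the same in-place edits done with Go regexps instead of sed; illustrative, not minikube's code:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            panic(err)
        }
    }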
	I0815 17:22:37.171073  454315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:22:37.171133  454315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:22:37.174464  454315 start.go:563] Will wait 60s for crictl version
	I0815 17:22:37.174514  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:22:37.177714  454315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:22:37.210008  454315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 17:22:37.210100  454315 ssh_runner.go:195] Run: crio --version
	I0815 17:22:37.243479  454315 ssh_runner.go:195] Run: crio --version
	I0815 17:22:37.279364  454315 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 17:22:37.280669  454315 out.go:177]   - env NO_PROXY=192.168.49.2
	I0815 17:22:37.282040  454315 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0815 17:22:37.283273  454315 cli_runner.go:164] Run: docker network inspect ha-896691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:22:37.300343  454315 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 17:22:37.304027  454315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
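Note: the bash one-liner above drops any stale host.minikube.internal mapping and appends a fresh one pointing at the network gateway. The same filter-and-append in plain Go, as an illustrative sketch:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Keep every line except an existing host.minikube.internal mapping.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.49.1\thost.minikube.internal")
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }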
	I0815 17:22:37.313986  454315 mustload.go:65] Loading cluster: ha-896691
	I0815 17:22:37.314237  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:22:37.314430  454315 cli_runner.go:164] Run: docker container inspect ha-896691 --format={{.State.Status}}
	I0815 17:22:37.330589  454315 host.go:66] Checking if "ha-896691" exists ...
	I0815 17:22:37.330838  454315 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691 for IP: 192.168.49.4
	I0815 17:22:37.330851  454315 certs.go:194] generating shared ca certs ...
	I0815 17:22:37.330864  454315 certs.go:226] acquiring lock for ca certs: {Name:mkf196aaefcb61003123eeb327e0f1a70bf4bfe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:22:37.330992  454315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key
	I0815 17:22:37.331028  454315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key
	I0815 17:22:37.331038  454315 certs.go:256] generating profile certs ...
	I0815 17:22:37.331130  454315 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.key
	I0815 17:22:37.331190  454315 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key.7ec55f94
	I0815 17:22:37.331239  454315 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.key
	I0815 17:22:37.331252  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:22:37.331267  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:22:37.331280  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:22:37.331292  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:22:37.331304  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:22:37.331316  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:22:37.331332  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:22:37.331344  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:22:37.331392  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem (1338 bytes)
	W0815 17:22:37.331421  454315 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091_empty.pem, impossibly tiny 0 bytes
	I0815 17:22:37.331430  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 17:22:37.331451  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem (1078 bytes)
	I0815 17:22:37.331473  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:22:37.331493  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem (1675 bytes)
	I0815 17:22:37.331529  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem (1708 bytes)
	I0815 17:22:37.331557  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> /usr/share/ca-certificates/3840912.pem
	I0815 17:22:37.331570  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:22:37.331582  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem -> /usr/share/ca-certificates/384091.pem
	I0815 17:22:37.331629  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:22:37.348383  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691/id_rsa Username:docker}
	I0815 17:22:37.436872  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 17:22:37.440403  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 17:22:37.451432  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 17:22:37.454299  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 17:22:37.464888  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 17:22:37.467905  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 17:22:37.479608  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 17:22:37.482690  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 17:22:37.493573  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 17:22:37.496404  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 17:22:37.507600  454315 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 17:22:37.510698  454315 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0815 17:22:37.521434  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:22:37.543254  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:22:37.565010  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:22:37.588243  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 17:22:37.609985  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 17:22:37.630792  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 17:22:37.651698  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:22:37.672877  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:22:37.694722  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem --> /usr/share/ca-certificates/3840912.pem (1708 bytes)
	I0815 17:22:37.716777  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:22:37.737791  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem --> /usr/share/ca-certificates/384091.pem (1338 bytes)
	I0815 17:22:37.759274  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 17:22:37.774933  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 17:22:37.790774  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 17:22:37.806603  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 17:22:37.821887  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 17:22:37.837702  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0815 17:22:37.853340  454315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 17:22:37.869675  454315 ssh_runner.go:195] Run: openssl version
	I0815 17:22:37.874474  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384091.pem && ln -fs /usr/share/ca-certificates/384091.pem /etc/ssl/certs/384091.pem"
	I0815 17:22:37.883166  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384091.pem
	I0815 17:22:37.886592  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:14 /usr/share/ca-certificates/384091.pem
	I0815 17:22:37.886657  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384091.pem
	I0815 17:22:37.893432  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384091.pem /etc/ssl/certs/51391683.0"
	I0815 17:22:37.902045  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3840912.pem && ln -fs /usr/share/ca-certificates/3840912.pem /etc/ssl/certs/3840912.pem"
	I0815 17:22:37.910711  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3840912.pem
	I0815 17:22:37.913825  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:14 /usr/share/ca-certificates/3840912.pem
	I0815 17:22:37.913879  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3840912.pem
	I0815 17:22:37.920044  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3840912.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:22:37.928108  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:22:37.936756  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:22:37.939727  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:22:37.939793  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:22:37.945857  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
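Note: OpenSSL locates CAs in /etc/ssl/certs by subject-hash filename, which is why each installed cert above gets a hash-named symlink such as b5213941.0. A sketch of that hash-and-link step, shelling out to openssl for the hash the way the log does; illustrative only:

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // openssl prints the subject hash the .0 symlink must be named after.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                panic(err)
            }
        }
    }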
	I0815 17:22:37.953381  454315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:22:37.956350  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 17:22:37.962154  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 17:22:37.968844  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 17:22:37.977181  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 17:22:37.983525  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 17:22:37.989785  454315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 17:22:37.996872  454315 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.31.0 crio true true} ...
	I0815 17:22:37.996983  454315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-896691-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-896691 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
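Note: the empty ExecStart= line in the drop-in above clears the base unit's command so the drop-in fully controls the kubelet flags, including --node-ip and --hostname-override for this node. A hypothetical text/template sketch for rendering such a drop-in (a simplified template, not minikube's; it assumes the drop-in directory already exists):

    package main

    import (
        "os"
        "text/template"
    )

    // Simplified drop-in modeled on the unit printed above.
    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        f, err := os.Create("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        if err := t.Execute(f, map[string]string{
            "Version":  "v1.31.0",
            "Hostname": "ha-896691-m03",
            "NodeIP":   "192.168.49.4",
        }); err != nil {
            panic(err)
        }
    }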
	I0815 17:22:37.997018  454315 kube-vip.go:115] generating kube-vip config ...
	I0815 17:22:37.997054  454315 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0815 17:22:38.009982  454315 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 17:22:38.010049  454315 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
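Note: the generated static pod runs kube-vip with leader election on the plndr-cp-lock lease and advertises the 192.168.49.254 VIP on port 8443; control-plane load balancing (lb_enable) was switched on automatically because the ip_vs modules were found above. A small sketch that parses the written manifest back into a typed Pod to inspect those settings, using sigs.k8s.io/yaml; illustrative only, and the manifest path is taken from the log:

    package main

    import (
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var pod corev1.Pod
        if err := yaml.Unmarshal(raw, &pod); err != nil {
            panic(err)
        }
        if len(pod.Spec.Containers) == 0 {
            panic("manifest has no containers")
        }
        // Print the env vars that control the VIP and leader election.
        for _, e := range pod.Spec.Containers[0].Env {
            switch e.Name {
            case "address", "vip_leaderelection", "lb_port":
                fmt.Printf("%s=%s\n", e.Name, e.Value)
            }
        }
    }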
	I0815 17:22:38.010101  454315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:22:38.018254  454315 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:22:38.018305  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 17:22:38.027312  454315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 17:22:38.060141  454315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:22:38.076372  454315 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 17:22:38.092284  454315 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:22:38.095504  454315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:22:38.106000  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:22:38.202028  454315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:22:38.212492  454315 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:22:38.212798  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:22:38.214426  454315 out.go:177] * Verifying Kubernetes components...
	I0815 17:22:38.215542  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:22:38.307974  454315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:22:38.318663  454315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:22:38.318916  454315 kapi.go:59] client config for ha-896691: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.key", CAFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	W0815 17:22:38.318982  454315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0815 17:22:38.319213  454315 node_ready.go:35] waiting up to 6m0s for node "ha-896691-m03" to be "Ready" ...
	I0815 17:22:38.319292  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:38.319300  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:38.319307  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:38.319312  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:38.321796  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:38.819651  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:38.819672  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:38.819680  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:38.819686  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:38.822298  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:39.320046  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:39.320066  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:39.320077  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:39.320084  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:39.322657  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:39.819423  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:39.819445  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:39.819456  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:39.819461  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:39.822002  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:40.319918  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:40.319943  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:40.319955  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:40.319960  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:40.322616  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:40.323187  454315 node_ready.go:53] node "ha-896691-m03" has status "Ready":"Unknown"
	I0815 17:22:40.819459  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:40.819482  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:40.819491  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:40.819495  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:40.822429  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:41.320257  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:41.320275  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:41.320283  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:41.320287  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:41.322790  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:41.819778  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:41.819798  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:41.819806  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:41.819811  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:41.821966  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:42.319757  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:42.319776  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.319787  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.319794  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.322284  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:42.820052  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:42.820076  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.820088  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.820093  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.822803  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:42.823300  454315 node_ready.go:49] node "ha-896691-m03" has status "Ready":"True"
	I0815 17:22:42.823317  454315 node_ready.go:38] duration metric: took 4.504088193s for node "ha-896691-m03" to be "Ready" ...
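Note: the roughly 500ms cadence of the GETs above is a poll loop on the node's Ready condition. A minimal client-go sketch of the same wait, assuming a recent k8s.io/apimachinery (wait.PollUntilContextTimeout) and an assumed kubeconfig path; not minikube's actual implementation:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll every 500ms, matching the cadence visible above, until Ready or 6m elapses.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                n, err := cs.CoreV1().Nodes().Get(ctx, "ha-896691-m03", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
    }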
	I0815 17:22:42.823325  454315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:22:42.823382  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:22:42.823390  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.823397  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.823401  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.829377  454315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 17:22:42.838268  454315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lmnsh" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:42.838361  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-lmnsh
	I0815 17:22:42.838371  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.838381  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.838390  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.840559  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:42.841092  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:42.841105  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.841113  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.841116  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.843000  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:42.843470  454315 pod_ready.go:93] pod "coredns-6f6b679f8f-lmnsh" in "kube-system" namespace has status "Ready":"True"
	I0815 17:22:42.843486  454315 pod_ready.go:82] duration metric: took 5.195351ms for pod "coredns-6f6b679f8f-lmnsh" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:42.843495  454315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-w6rw2" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:42.843541  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-w6rw2
	I0815 17:22:42.843548  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.843556  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.843560  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.845489  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:42.846024  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:42.846038  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.846046  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.846055  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.847680  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:42.848088  454315 pod_ready.go:93] pod "coredns-6f6b679f8f-w6rw2" in "kube-system" namespace has status "Ready":"True"
	I0815 17:22:42.848106  454315 pod_ready.go:82] duration metric: took 4.606136ms for pod "coredns-6f6b679f8f-w6rw2" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:42.848115  454315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:42.848159  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691
	I0815 17:22:42.848167  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.848174  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.848179  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.849832  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:42.850274  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:22:42.850289  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.850296  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.850301  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.852025  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:42.852427  454315 pod_ready.go:93] pod "etcd-ha-896691" in "kube-system" namespace has status "Ready":"True"
	I0815 17:22:42.852443  454315 pod_ready.go:82] duration metric: took 4.32286ms for pod "etcd-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:42.852452  454315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:42.852509  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m02
	I0815 17:22:42.852516  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.852523  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.852526  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.854396  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:42.855000  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:22:42.855019  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:42.855028  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:42.855032  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:42.856633  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:42.857083  454315 pod_ready.go:93] pod "etcd-ha-896691-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:22:42.857099  454315 pod_ready.go:82] duration metric: took 4.629839ms for pod "etcd-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:42.857107  454315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:22:43.020466  454315 request.go:632] Waited for 163.300279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:43.020541  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:43.020570  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:43.020581  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:43.020588  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:43.023039  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:43.220986  454315 request.go:632] Waited for 197.343134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:43.221068  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:43.221074  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:43.221082  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:43.221087  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:43.223684  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
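The request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's own token-bucket rate limiter, which delays a request once the client exceeds its configured QPS/Burst; the message explicitly distinguishes this from server-side API Priority and Fairness. A sketch of where those knobs live, assuming a kubeconfig path; the values shown are client-go's long-standing defaults (QPS 5, Burst 10):

	package throttle

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func newClient(kubeconfig string) (kubernetes.Interface, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 5    // steady-state requests/sec before the limiter starts delaying
		cfg.Burst = 10 // short bursts allowed above QPS
		return kubernetes.NewForConfig(cfg)
	}

Raising QPS and Burst makes the "Waited for" messages disappear, at the cost of more load on the apiserver.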
	I0815 17:22:43.420381  454315 request.go:632] Waited for 62.216442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:43.420435  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:43.420439  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:43.420447  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:43.420455  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:43.422876  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:43.620848  454315 request.go:632] Waited for 197.339671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:43.620908  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:43.620914  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:43.620921  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:43.620925  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:43.623203  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:43.857766  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:43.857785  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:43.857793  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:43.857797  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:43.860363  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:44.020339  454315 request.go:632] Waited for 159.313757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:44.020396  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:44.020403  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:44.020415  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:44.020425  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:44.022819  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:44.358225  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:44.358248  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:44.358259  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:44.358263  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:44.360840  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:44.420788  454315 request.go:632] Waited for 59.168293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:44.420864  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:44.420870  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:44.420878  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:44.420899  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:44.423482  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:44.857668  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:44.857687  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:44.857695  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:44.857704  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:44.860368  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:44.861034  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:44.861050  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:44.861059  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:44.861064  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:44.863128  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:44.863554  454315 pod_ready.go:103] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"False"
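Each GET / "Request Headers:" / "Response Status:" triple above is client-go's round_trippers debug output (URLs and status appear at moderate log verbosity, headers at higher verbosity). A minimal sketch of a logging http.RoundTripper that produces lines of a similar shape; logRT is an illustrative type, not client-go's implementation:

	package debugrt

	import (
		"log"
		"net/http"
		"time"
	)

	// logRT wraps another RoundTripper and logs each exchange roughly in the
	// shape seen in the log above.
	type logRT struct{ next http.RoundTripper }

	func (l logRT) RoundTrip(req *http.Request) (*http.Response, error) {
		log.Printf("%s %s", req.Method, req.URL)
		log.Printf("Request Headers:")
		for k, v := range req.Header {
			// Go map iteration order is randomized, which is why the
			// Accept/User-Agent lines swap order between requests above.
			log.Printf("    %s: %v", k, v)
		}
		start := time.Now()
		resp, err := l.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
		return resp, nil
	}

Wiring such a wrapper in is one assignment on rest.Config, e.g. cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper { return logRT{rt} }.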
	I0815 17:22:45.357790  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:45.357810  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:45.357818  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:45.357823  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:45.360466  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:45.361140  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:45.361154  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:45.361161  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:45.361165  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:45.363261  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:45.857792  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:45.857813  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:45.857825  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:45.857830  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:45.860454  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:45.861110  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:45.861128  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:45.861135  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:45.861139  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:45.863191  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:46.357788  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:46.357809  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:46.357817  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:46.357822  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:46.360262  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:46.360917  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:46.360931  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:46.360938  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:46.360943  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:46.362923  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:46.857776  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:46.857799  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:46.857813  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:46.857818  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:46.860480  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:46.861261  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:46.861290  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:46.861302  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:46.861311  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:46.863412  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:46.863860  454315 pod_ready.go:103] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:47.357824  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:47.357859  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:47.357873  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:47.357878  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:47.360301  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:47.360955  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:47.360974  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:47.360985  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:47.360990  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:47.363124  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:47.857830  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:47.857850  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:47.857858  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:47.857862  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:47.860652  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:47.861413  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:47.861433  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:47.861444  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:47.861450  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:47.863622  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:48.357503  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:48.357523  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:48.357529  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:48.357532  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:48.360055  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:48.360710  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:48.360725  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:48.360732  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:48.360737  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:48.362717  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:48.857568  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:48.857590  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:48.857598  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:48.857602  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:48.860061  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:48.860734  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:48.860750  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:48.860756  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:48.860762  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:48.862727  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:22:49.357500  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:49.357519  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:49.357527  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:49.357531  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:49.360256  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:49.360896  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:49.360911  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:49.360919  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:49.360924  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:49.363102  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:49.363503  454315 pod_ready.go:103] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:49.858033  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:49.858054  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:49.858062  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:49.858067  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:49.861249  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:22:49.861882  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:49.861901  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:49.861911  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:49.861917  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:49.863973  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:50.357765  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:50.357788  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:50.357796  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:50.357801  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:50.360617  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:50.361323  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:50.361338  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:50.361346  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:50.361351  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:50.363384  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:50.857262  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:50.857282  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:50.857290  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:50.857294  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:50.860076  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:50.860707  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:50.860723  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:50.860731  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:50.860734  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:50.862865  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:51.357718  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:51.357742  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:51.357754  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:51.357761  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:51.360419  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:51.361154  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:51.361174  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:51.361185  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:51.361191  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:51.363519  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:51.364076  454315 pod_ready.go:103] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:51.858332  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:51.858361  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:51.858374  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:51.858379  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:51.861089  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:51.861746  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:51.861762  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:51.861769  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:51.861773  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:51.864020  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:52.357817  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:52.357842  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:52.357852  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:52.357859  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:52.360604  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:52.361246  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:52.361263  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:52.361271  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:52.361274  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:52.363406  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:52.858042  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:52.858067  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:52.858078  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:52.858089  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:52.861276  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:22:52.862022  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:52.862042  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:52.862052  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:52.862058  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:52.864314  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:53.357806  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:53.357826  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:53.357834  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:53.357838  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:53.360644  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:53.361254  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:53.361269  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:53.361280  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:53.361285  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:53.363493  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:53.857840  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:53.857865  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:53.857877  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:53.857886  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:53.860622  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:53.861476  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:53.861492  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:53.861500  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:53.861504  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:53.863862  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:53.864432  454315 pod_ready.go:103] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:54.357666  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:54.357690  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:54.357701  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:54.357709  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:54.360480  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:54.361151  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:54.361167  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:54.361175  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:54.361179  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:54.363285  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:54.858140  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:54.858159  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:54.858167  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:54.858171  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:54.860730  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:54.861318  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:54.861335  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:54.861342  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:54.861346  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:54.863398  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:55.358268  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:55.358286  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:55.358295  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:55.358300  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:55.370740  454315 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0815 17:22:55.372076  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:55.372144  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:55.372167  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:55.372186  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:55.375106  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:55.857793  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:55.857812  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:55.857820  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:55.857826  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:55.860484  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:55.861171  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:55.861187  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:55.861195  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:55.861199  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:55.863228  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:56.357853  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:56.357873  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:56.357882  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:56.357886  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:56.360356  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:56.360975  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:56.360989  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:56.360996  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:56.360999  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:56.363035  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:56.363476  454315 pod_ready.go:103] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:56.857779  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:56.857798  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:56.857806  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:56.857811  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:56.860439  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:56.861222  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:56.861242  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:56.861253  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:56.861258  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:56.863317  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:57.357797  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:57.357817  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:57.357825  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:57.357830  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:57.360491  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:57.361284  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:57.361303  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:57.361315  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:57.361321  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:57.363451  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:57.858091  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:57.858110  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:57.858118  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:57.858121  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:57.860675  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:57.861337  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:57.861353  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:57.861359  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:57.861363  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:57.863562  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:58.357429  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:58.357449  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:58.357460  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:58.357467  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:58.360053  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:58.360743  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:58.360760  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:58.360767  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:58.360772  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:58.362790  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:58.857608  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:58.857628  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:58.857635  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:58.857640  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:58.860402  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:58.861109  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:58.861126  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:58.861133  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:58.861138  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:58.863274  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:58.863744  454315 pod_ready.go:103] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 17:22:59.357803  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:59.357828  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:59.357837  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:59.357842  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:59.360658  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:59.361314  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:59.361330  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:59.361336  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:59.361340  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:59.363359  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:59.857914  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:22:59.857933  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:59.857941  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:59.857944  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:59.860532  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:22:59.861115  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:22:59.861130  454315 round_trippers.go:469] Request Headers:
	I0815 17:22:59.861138  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:22:59.861141  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:22:59.863109  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:00.357794  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:00.357817  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:00.357827  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:00.357833  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:00.360406  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:00.361009  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:00.361026  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:00.361034  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:00.361037  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:00.362898  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:00.857716  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:00.857737  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:00.857745  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:00.857749  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:00.860347  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:00.860980  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:00.860996  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:00.861003  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:00.861006  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:00.863141  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:01.357788  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:01.357808  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:01.357816  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:01.357821  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:01.360588  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:01.361248  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:01.361263  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:01.361270  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:01.361275  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:01.363329  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:01.363808  454315 pod_ready.go:103] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 17:23:01.857292  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:01.857312  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:01.857321  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:01.857325  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:01.860064  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:01.860847  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:01.860865  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:01.860875  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:01.860880  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:01.863126  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:02.357811  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:02.357834  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:02.357844  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:02.357849  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:02.360489  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:02.361150  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:02.361168  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:02.361175  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:02.361179  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:02.363249  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:02.857865  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:02.857887  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:02.857896  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:02.857899  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:02.860696  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:02.861419  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:02.861439  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:02.861449  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:02.861456  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:02.863840  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:03.358303  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:03.358328  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:03.358340  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:03.358346  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:03.361150  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:03.361723  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:03.361736  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:03.361746  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:03.361751  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:03.363769  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:03.364224  454315 pod_ready.go:103] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"False"
	I0815 17:23:03.857543  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:03.857568  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:03.857578  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:03.857583  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:03.869385  454315 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0815 17:23:03.870097  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:03.870116  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:03.870126  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:03.870131  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:03.881891  454315 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0815 17:23:04.357821  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:04.357843  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.357853  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.357858  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.360629  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:04.361284  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:04.361301  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.361308  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.361311  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.363647  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:04.857889  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:04.857909  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.857918  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.857922  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.860745  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:04.861514  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:04.861532  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.861543  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.861551  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.863633  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:04.864207  454315 pod_ready.go:93] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:04.864226  454315 pod_ready.go:82] duration metric: took 22.007112207s for pod "etcd-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:04.864257  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:04.864322  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:23:04.864331  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.864345  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.864356  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.866658  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:04.867253  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:04.867267  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.867274  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.867277  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.869416  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:04.870019  454315 pod_ready.go:98] node "ha-896691" hosting pod "kube-apiserver-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:04.870048  454315 pod_ready.go:82] duration metric: took 5.779229ms for pod "kube-apiserver-ha-896691" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:04.870060  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "kube-apiserver-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
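Here the wait exits by a different branch: the pod query succeeded, but the hosting node reports Ready:"Unknown", so pod_ready skips the pod rather than blocking on it. The node-side test reduces to reading the node's Ready condition, sketched below; nodeReady is an illustrative helper, not minikube's code:

	package nodecheck

	import corev1 "k8s.io/api/core/v1"

	// nodeReady reports whether a node's Ready condition is True; both
	// "False" and "Unknown" (as logged above for ha-896691) count as
	// not ready.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}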
	I0815 17:23:04.870069  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:04.870133  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m02
	I0815 17:23:04.870142  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.870153  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.870160  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.872196  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:04.872873  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:04.872889  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.872899  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.872906  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.874894  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:04.875407  454315 pod_ready.go:93] pod "kube-apiserver-ha-896691-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:04.875429  454315 pod_ready.go:82] duration metric: took 5.349861ms for pod "kube-apiserver-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:04.875442  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:04.875502  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m03
	I0815 17:23:04.875512  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.875522  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.875529  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.878001  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:04.878700  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:04.878715  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.878722  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.878725  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.880724  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:04.881250  454315 pod_ready.go:93] pod "kube-apiserver-ha-896691-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:04.881269  454315 pod_ready.go:82] duration metric: took 5.820077ms for pod "kube-apiserver-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:04.881280  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:04.881336  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691
	I0815 17:23:04.881346  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.881356  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.881367  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.883314  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:04.884028  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:04.884047  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:04.884058  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:04.884064  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:04.886694  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:04.887265  454315 pod_ready.go:98] node "ha-896691" hosting pod "kube-controller-manager-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:04.887329  454315 pod_ready.go:82] duration metric: took 6.037031ms for pod "kube-controller-manager-ha-896691" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:04.887346  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "kube-controller-manager-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:04.887355  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:05.058736  454315 request.go:632] Waited for 171.287232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m02
	I0815 17:23:05.058810  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m02
	I0815 17:23:05.058815  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:05.058825  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:05.058832  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:05.061559  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:05.258595  454315 request.go:632] Waited for 196.343411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:05.258664  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:05.258672  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:05.258687  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:05.258696  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:05.261257  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:05.261712  454315 pod_ready.go:93] pod "kube-controller-manager-ha-896691-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:05.261734  454315 pod_ready.go:82] duration metric: took 374.371652ms for pod "kube-controller-manager-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:05.261744  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:05.458868  454315 request.go:632] Waited for 197.034882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m03
	I0815 17:23:05.458923  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m03
	I0815 17:23:05.458929  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:05.458939  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:05.458948  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:05.461765  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:05.658818  454315 request.go:632] Waited for 196.341382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:05.658883  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:05.658890  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:05.658901  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:05.658913  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:05.661586  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:05.662078  454315 pod_ready.go:93] pod "kube-controller-manager-ha-896691-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:05.662098  454315 pod_ready.go:82] duration metric: took 400.348016ms for pod "kube-controller-manager-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:05.662108  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-74b2m" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:05.858173  454315 request.go:632] Waited for 195.965596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-74b2m
	I0815 17:23:05.858236  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-74b2m
	I0815 17:23:05.858243  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:05.858253  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:05.858265  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:05.861042  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:06.057936  454315 request.go:632] Waited for 196.293831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:06.058045  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:06.058055  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:06.058073  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:06.058083  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:06.060728  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:06.061324  454315 pod_ready.go:93] pod "kube-proxy-74b2m" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:06.061384  454315 pod_ready.go:82] duration metric: took 399.266613ms for pod "kube-proxy-74b2m" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:06.061402  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9m9tc" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:06.258259  454315 request.go:632] Waited for 196.751313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9m9tc
	I0815 17:23:06.258316  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9m9tc
	I0815 17:23:06.258321  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:06.258329  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:06.258333  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:06.261208  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:06.458041  454315 request.go:632] Waited for 196.219863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:06.458106  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:06.458121  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:06.458135  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:06.458146  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:06.461099  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:06.461804  454315 pod_ready.go:98] node "ha-896691" hosting pod "kube-proxy-9m9tc" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:06.461836  454315 pod_ready.go:82] duration metric: took 400.424534ms for pod "kube-proxy-9m9tc" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:06.461850  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "kube-proxy-9m9tc" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:06.461860  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g4qhb" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:06.658772  454315 request.go:632] Waited for 196.812999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb
	I0815 17:23:06.658840  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb
	I0815 17:23:06.658847  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:06.658860  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:06.658870  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:06.662119  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:23:06.858014  454315 request.go:632] Waited for 195.281243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:06.858087  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:06.858102  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:06.858113  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:06.858141  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:06.860968  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:06.861461  454315 pod_ready.go:98] node "ha-896691-m04" hosting pod "kube-proxy-g4qhb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691-m04" has status "Ready":"Unknown"
	I0815 17:23:06.861484  454315 pod_ready.go:82] duration metric: took 399.616849ms for pod "kube-proxy-g4qhb" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:06.861492  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691-m04" hosting pod "kube-proxy-g4qhb" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691-m04" has status "Ready":"Unknown"
	I0815 17:23:06.861500  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z4mvj" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:07.058451  454315 request.go:632] Waited for 196.882856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z4mvj
	I0815 17:23:07.058545  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z4mvj
	I0815 17:23:07.058558  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:07.058570  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:07.058576  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:07.061379  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:07.258420  454315 request.go:632] Waited for 196.346315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:07.258508  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:07.258517  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:07.258525  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:07.258532  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:07.261222  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:07.261839  454315 pod_ready.go:93] pod "kube-proxy-z4mvj" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:07.261862  454315 pod_ready.go:82] duration metric: took 400.351973ms for pod "kube-proxy-z4mvj" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:07.261875  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:07.458817  454315 request.go:632] Waited for 196.859748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691
	I0815 17:23:07.458909  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691
	I0815 17:23:07.458920  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:07.458930  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:07.458936  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:07.461400  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:07.658290  454315 request.go:632] Waited for 196.335059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:07.658359  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:07.658365  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:07.658373  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:07.658380  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:07.660984  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:07.661513  454315 pod_ready.go:98] node "ha-896691" hosting pod "kube-scheduler-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:07.661535  454315 pod_ready.go:82] duration metric: took 399.648042ms for pod "kube-scheduler-ha-896691" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:07.661546  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "kube-scheduler-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:07.661552  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:07.858499  454315 request.go:632] Waited for 196.867839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m02
	I0815 17:23:07.858553  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m02
	I0815 17:23:07.858558  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:07.858566  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:07.858571  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:07.861090  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:08.058024  454315 request.go:632] Waited for 196.267575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:08.058079  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:08.058086  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:08.058097  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:08.058107  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:08.060662  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:08.061141  454315 pod_ready.go:93] pod "kube-scheduler-ha-896691-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:08.061162  454315 pod_ready.go:82] duration metric: took 399.603468ms for pod "kube-scheduler-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:08.061171  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:08.258239  454315 request.go:632] Waited for 196.993831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m03
	I0815 17:23:08.258308  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m03
	I0815 17:23:08.258315  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:08.258324  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:08.258333  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:08.260930  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:08.458801  454315 request.go:632] Waited for 197.336694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:08.458888  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:08.458898  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:08.458905  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:08.458909  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:08.461692  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:08.462126  454315 pod_ready.go:93] pod "kube-scheduler-ha-896691-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:08.462145  454315 pod_ready.go:82] duration metric: took 400.968096ms for pod "kube-scheduler-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:08.462160  454315 pod_ready.go:39] duration metric: took 25.638825028s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
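The pod_ready loop above has a notable wrinkle: a pod on a node whose status is "Ready":"Unknown" is recorded as skipped rather than failed. A minimal client-go sketch of that two-step check (a hypothetical helper mirroring the behavior in the log, not minikube's actual pod_ready.go) looks like:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod has the Ready condition set to True,
    // and whether its hosting node is itself Ready; callers skip (rather
    // than fail) pods whose node reports NotReady/Unknown, as above.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (podOK, nodeOK bool, err error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                podOK = c.Status == corev1.ConditionTrue
            }
        }
        node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return podOK, false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                nodeOK = c.Status == corev1.ConditionTrue
            }
        }
        return podOK, nodeOK, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        podOK, nodeOK, err := podReady(context.Background(), cs, "kube-system", "kube-proxy-74b2m")
        fmt.Println("pod ready:", podOK, "node ready:", nodeOK, "err:", err)
    }

That split explains the E-level WaitExtra lines: the pod lookups for ha-896691 and ha-896691-m04 succeed, but the node condition check downgrades them to skips.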
	I0815 17:23:08.462178  454315 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:23:08.462232  454315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:23:08.473165  454315 api_server.go:72] duration metric: took 30.260630546s to wait for apiserver process to appear ...
	I0815 17:23:08.473187  454315 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:23:08.473210  454315 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 17:23:08.477801  454315 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 17:23:08.477861  454315 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0815 17:23:08.477869  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:08.477876  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:08.477882  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:08.478635  454315 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 17:23:08.478728  454315 api_server.go:141] control plane version: v1.31.0
	I0815 17:23:08.478752  454315 api_server.go:131] duration metric: took 5.556861ms to wait for apiserver health ...
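The healthz step is just two HTTPS GETs against the apiserver: /healthz must return "ok", then /version yields the control-plane version (v1.31.0 here). A self-contained sketch of the same probe (the skipped TLS verification is illustrative; minikube authenticates with its client certificates instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustrative only: skipping verification stands in for
            // minikube's cert-based client config.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.49.2:8443" + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
        }
    }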
	I0815 17:23:08.478762  454315 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:23:08.658237  454315 request.go:632] Waited for 179.386754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:23:08.658318  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:23:08.658329  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:08.658340  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:08.658352  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:08.663319  454315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:23:08.670345  454315 system_pods.go:59] 26 kube-system pods found
	I0815 17:23:08.670375  454315 system_pods.go:61] "coredns-6f6b679f8f-lmnsh" [74ccd084-33a7-4529-919d-604b8750c354] Running
	I0815 17:23:08.670381  454315 system_pods.go:61] "coredns-6f6b679f8f-w6rw2" [3515df76-e41e-4c78-834f-5fbe2abc873d] Running
	I0815 17:23:08.670385  454315 system_pods.go:61] "etcd-ha-896691" [0a2ffa41-f65c-40fc-a35b-ea9f9db365ac] Running
	I0815 17:23:08.670389  454315 system_pods.go:61] "etcd-ha-896691-m02" [d028af14-3f5c-41f9-ac91-99ae705cf2b2] Running
	I0815 17:23:08.670392  454315 system_pods.go:61] "etcd-ha-896691-m03" [1101327d-2ac1-4210-906f-efc89ed60e64] Running
	I0815 17:23:08.670397  454315 system_pods.go:61] "kindnet-2bc4h" [5e118a8e-e9e4-45ee-94f7-654076df98d1] Running
	I0815 17:23:08.670401  454315 system_pods.go:61] "kindnet-8k6qn" [b4c2a221-3152-4594-8bf7-4f05626ac380] Running
	I0815 17:23:08.670404  454315 system_pods.go:61] "kindnet-9jffh" [6c800d06-5569-49ad-ae6f-3eb183c8ee5f] Running
	I0815 17:23:08.670407  454315 system_pods.go:61] "kindnet-qklml" [2c8d9dcc-4049-4948-b6ec-013a444bd983] Running
	I0815 17:23:08.670411  454315 system_pods.go:61] "kube-apiserver-ha-896691" [711da542-b0c9-44e7-86dc-ee202e3c8fd8] Running
	I0815 17:23:08.670415  454315 system_pods.go:61] "kube-apiserver-ha-896691-m02" [78aa1912-3696-4d32-beea-8ed41785c6fb] Running
	I0815 17:23:08.670418  454315 system_pods.go:61] "kube-apiserver-ha-896691-m03" [647e2c49-d59d-43ff-8149-f5d81d3ed071] Running
	I0815 17:23:08.670421  454315 system_pods.go:61] "kube-controller-manager-ha-896691" [6a9e2824-37af-4cdc-a6f6-897fd37b056e] Running
	I0815 17:23:08.670425  454315 system_pods.go:61] "kube-controller-manager-ha-896691-m02" [af402e09-da87-4ce5-b722-e61c6e5df43b] Running
	I0815 17:23:08.670428  454315 system_pods.go:61] "kube-controller-manager-ha-896691-m03" [1b38cdb8-3607-4123-b4af-cb34e1899830] Running
	I0815 17:23:08.670432  454315 system_pods.go:61] "kube-proxy-74b2m" [c81582d5-063e-4bfa-a419-ef5d7c3422a1] Running
	I0815 17:23:08.670435  454315 system_pods.go:61] "kube-proxy-9m9tc" [6faed64d-d52e-4f36-8162-009d01da4ac8] Running
	I0815 17:23:08.670438  454315 system_pods.go:61] "kube-proxy-g4qhb" [125294c7-3523-4388-8a2d-5a199e1f2eef] Running
	I0815 17:23:08.670441  454315 system_pods.go:61] "kube-proxy-z4mvj" [7729789c-2a47-4633-831f-85fa51ebbc72] Running
	I0815 17:23:08.670444  454315 system_pods.go:61] "kube-scheduler-ha-896691" [64562846-8ad9-459d-af36-905c9c55c3c8] Running
	I0815 17:23:08.670447  454315 system_pods.go:61] "kube-scheduler-ha-896691-m02" [343b49bc-647a-42c0-a4dc-613e97613743] Running
	I0815 17:23:08.670451  454315 system_pods.go:61] "kube-scheduler-ha-896691-m03" [38e1f896-e7d7-47c2-a152-296284fab72e] Running
	I0815 17:23:08.670454  454315 system_pods.go:61] "kube-vip-ha-896691" [03e7e34d-56f7-40fb-b24c-864f9a08cdc7] Running
	I0815 17:23:08.670457  454315 system_pods.go:61] "kube-vip-ha-896691-m02" [8ac744c4-10fc-433d-900e-6d0cfb4f3ca4] Running
	I0815 17:23:08.670459  454315 system_pods.go:61] "kube-vip-ha-896691-m03" [2e3154ce-0dd7-426c-9409-fe00c0796ecc] Running
	I0815 17:23:08.670462  454315 system_pods.go:61] "storage-provisioner" [c53d929f-4e2b-4255-8189-e4d13aa590e4] Running
	I0815 17:23:08.670469  454315 system_pods.go:74] duration metric: took 191.700659ms to wait for pod list to return data ...
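The recurring "Waited for … due to client-side throttling, not priority and fairness" lines throughout this wait loop come from client-go's token-bucket rate limiter, not from the server: with the client defaults of QPS=5 and Burst=10, each request past the burst queues for about 1/QPS seconds, which is the ~196ms visible above. A sketch of raising those limits on a rest.Config (the values are illustrative, not what minikube sets):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Defaults are QPS=5, Burst=10; bumping them removes the ~200ms
        // client-side waits between back-to-back GETs.
        cfg.QPS = 50
        cfg.Burst = 100
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }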
	I0815 17:23:08.670480  454315 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:23:08.858792  454315 request.go:632] Waited for 188.216419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:23:08.858865  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:23:08.858874  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:08.858888  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:08.858895  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:08.861681  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:08.861800  454315 default_sa.go:45] found service account: "default"
	I0815 17:23:08.861815  454315 default_sa.go:55] duration metric: took 191.328528ms for default service account to be created ...
	I0815 17:23:08.861825  454315 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:23:09.058252  454315 request.go:632] Waited for 196.347698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:23:09.058310  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:23:09.058315  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:09.058323  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:09.058329  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:09.063506  454315 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 17:23:09.070360  454315 system_pods.go:86] 26 kube-system pods found
	I0815 17:23:09.070385  454315 system_pods.go:89] "coredns-6f6b679f8f-lmnsh" [74ccd084-33a7-4529-919d-604b8750c354] Running
	I0815 17:23:09.070391  454315 system_pods.go:89] "coredns-6f6b679f8f-w6rw2" [3515df76-e41e-4c78-834f-5fbe2abc873d] Running
	I0815 17:23:09.070395  454315 system_pods.go:89] "etcd-ha-896691" [0a2ffa41-f65c-40fc-a35b-ea9f9db365ac] Running
	I0815 17:23:09.070399  454315 system_pods.go:89] "etcd-ha-896691-m02" [d028af14-3f5c-41f9-ac91-99ae705cf2b2] Running
	I0815 17:23:09.070403  454315 system_pods.go:89] "etcd-ha-896691-m03" [1101327d-2ac1-4210-906f-efc89ed60e64] Running
	I0815 17:23:09.070407  454315 system_pods.go:89] "kindnet-2bc4h" [5e118a8e-e9e4-45ee-94f7-654076df98d1] Running
	I0815 17:23:09.070412  454315 system_pods.go:89] "kindnet-8k6qn" [b4c2a221-3152-4594-8bf7-4f05626ac380] Running
	I0815 17:23:09.070420  454315 system_pods.go:89] "kindnet-9jffh" [6c800d06-5569-49ad-ae6f-3eb183c8ee5f] Running
	I0815 17:23:09.070426  454315 system_pods.go:89] "kindnet-qklml" [2c8d9dcc-4049-4948-b6ec-013a444bd983] Running
	I0815 17:23:09.070434  454315 system_pods.go:89] "kube-apiserver-ha-896691" [711da542-b0c9-44e7-86dc-ee202e3c8fd8] Running
	I0815 17:23:09.070440  454315 system_pods.go:89] "kube-apiserver-ha-896691-m02" [78aa1912-3696-4d32-beea-8ed41785c6fb] Running
	I0815 17:23:09.070448  454315 system_pods.go:89] "kube-apiserver-ha-896691-m03" [647e2c49-d59d-43ff-8149-f5d81d3ed071] Running
	I0815 17:23:09.070458  454315 system_pods.go:89] "kube-controller-manager-ha-896691" [6a9e2824-37af-4cdc-a6f6-897fd37b056e] Running
	I0815 17:23:09.070468  454315 system_pods.go:89] "kube-controller-manager-ha-896691-m02" [af402e09-da87-4ce5-b722-e61c6e5df43b] Running
	I0815 17:23:09.070474  454315 system_pods.go:89] "kube-controller-manager-ha-896691-m03" [1b38cdb8-3607-4123-b4af-cb34e1899830] Running
	I0815 17:23:09.070478  454315 system_pods.go:89] "kube-proxy-74b2m" [c81582d5-063e-4bfa-a419-ef5d7c3422a1] Running
	I0815 17:23:09.070482  454315 system_pods.go:89] "kube-proxy-9m9tc" [6faed64d-d52e-4f36-8162-009d01da4ac8] Running
	I0815 17:23:09.070488  454315 system_pods.go:89] "kube-proxy-g4qhb" [125294c7-3523-4388-8a2d-5a199e1f2eef] Running
	I0815 17:23:09.070491  454315 system_pods.go:89] "kube-proxy-z4mvj" [7729789c-2a47-4633-831f-85fa51ebbc72] Running
	I0815 17:23:09.070497  454315 system_pods.go:89] "kube-scheduler-ha-896691" [64562846-8ad9-459d-af36-905c9c55c3c8] Running
	I0815 17:23:09.070500  454315 system_pods.go:89] "kube-scheduler-ha-896691-m02" [343b49bc-647a-42c0-a4dc-613e97613743] Running
	I0815 17:23:09.070506  454315 system_pods.go:89] "kube-scheduler-ha-896691-m03" [38e1f896-e7d7-47c2-a152-296284fab72e] Running
	I0815 17:23:09.070509  454315 system_pods.go:89] "kube-vip-ha-896691" [03e7e34d-56f7-40fb-b24c-864f9a08cdc7] Running
	I0815 17:23:09.070513  454315 system_pods.go:89] "kube-vip-ha-896691-m02" [8ac744c4-10fc-433d-900e-6d0cfb4f3ca4] Running
	I0815 17:23:09.070516  454315 system_pods.go:89] "kube-vip-ha-896691-m03" [2e3154ce-0dd7-426c-9409-fe00c0796ecc] Running
	I0815 17:23:09.070521  454315 system_pods.go:89] "storage-provisioner" [c53d929f-4e2b-4255-8189-e4d13aa590e4] Running
	I0815 17:23:09.070530  454315 system_pods.go:126] duration metric: took 208.698057ms to wait for k8s-apps to be running ...
	I0815 17:23:09.070543  454315 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:23:09.070593  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:23:09.081510  454315 system_svc.go:56] duration metric: took 10.959214ms WaitForService to wait for kubelet
	I0815 17:23:09.081537  454315 kubeadm.go:582] duration metric: took 30.869006854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:23:09.081561  454315 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:23:09.257903  454315 request.go:632] Waited for 176.252827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0815 17:23:09.257970  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0815 17:23:09.257975  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:09.257983  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:09.257988  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:09.260495  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:09.262142  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:23:09.262166  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:23:09.262183  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:23:09.262189  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:23:09.262197  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:23:09.262202  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:23:09.262207  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:23:09.262212  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:23:09.262221  454315 node_conditions.go:105] duration metric: took 180.654425ms to run NodePressure ...
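The NodePressure pass lists all nodes once and reads per-node capacity; the figures above (ephemeral storage 304681132Ki, 8 CPUs, repeated for each of the four nodes) come straight from node.Status.Capacity. A compact sketch of the same read (hypothetical, mirroring what node_conditions logs rather than reproducing it):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }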
	I0815 17:23:09.262235  454315 start.go:241] waiting for startup goroutines ...
	I0815 17:23:09.262261  454315 start.go:255] writing updated cluster config ...
	I0815 17:23:09.264372  454315 out.go:201] 
	I0815 17:23:09.265768  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:23:09.265853  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	I0815 17:23:09.267416  454315 out.go:177] * Starting "ha-896691-m04" worker node in "ha-896691" cluster
	I0815 17:23:09.268687  454315 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 17:23:09.269894  454315 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:23:09.270978  454315 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:23:09.271000  454315 cache.go:56] Caching tarball of preloaded images
	I0815 17:23:09.271004  454315 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:23:09.271119  454315 preload.go:172] Found /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:23:09.271135  454315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:23:09.271237  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	W0815 17:23:09.290515  454315 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 17:23:09.290538  454315 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:23:09.290616  454315 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:23:09.290633  454315 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:23:09.290639  454315 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:23:09.290648  454315 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:23:09.290658  454315 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 17:23:09.291711  454315 image.go:273] response: 
	I0815 17:23:09.339743  454315 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 17:23:09.339784  454315 cache.go:194] Successfully downloaded all kic artifacts
	I0815 17:23:09.339838  454315 start.go:360] acquireMachinesLock for ha-896691-m04: {Name:mkd36b2365d09b49e34c39fb92383a5139a997c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:23:09.339911  454315 start.go:364] duration metric: took 49.256µs to acquireMachinesLock for "ha-896691-m04"
	I0815 17:23:09.339936  454315 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:23:09.339946  454315 fix.go:54] fixHost starting: m04
	I0815 17:23:09.340171  454315 cli_runner.go:164] Run: docker container inspect ha-896691-m04 --format={{.State.Status}}
	I0815 17:23:09.357385  454315 fix.go:112] recreateIfNeeded on ha-896691-m04: state=Stopped err=<nil>
	W0815 17:23:09.357418  454315 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:23:09.360156  454315 out.go:177] * Restarting existing docker container for "ha-896691-m04" ...
	I0815 17:23:09.361331  454315 cli_runner.go:164] Run: docker start ha-896691-m04
	I0815 17:23:09.618673  454315 cli_runner.go:164] Run: docker container inspect ha-896691-m04 --format={{.State.Status}}
	I0815 17:23:09.635258  454315 kic.go:430] container "ha-896691-m04" state is running.
	I0815 17:23:09.635619  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m04
	I0815 17:23:09.653618  454315 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/config.json ...
	I0815 17:23:09.653840  454315 machine.go:93] provisionDockerMachine start ...
	I0815 17:23:09.653914  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m04
	I0815 17:23:09.671731  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:23:09.671948  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0815 17:23:09.671966  454315 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:23:09.672725  454315 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38046->127.0.0.1:33193: read: connection reset by peer
	I0815 17:23:12.811866  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896691-m04
	
	I0815 17:23:12.811903  454315 ubuntu.go:169] provisioning hostname "ha-896691-m04"
	I0815 17:23:12.811958  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m04
	I0815 17:23:12.828074  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:23:12.828322  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0815 17:23:12.828340  454315 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-896691-m04 && echo "ha-896691-m04" | sudo tee /etc/hostname
	I0815 17:23:12.971204  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896691-m04
	
	I0815 17:23:12.971281  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m04
	I0815 17:23:12.989270  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:23:12.989452  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0815 17:23:12.989470  454315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-896691-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-896691-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-896691-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:23:13.120651  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
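provisionDockerMachine drives all of the above over a native SSH client pinned to the container's published port (127.0.0.1:33193); the first dial even survives a connection-reset while sshd comes up. A stripped-down sketch of running one such provisioning command with golang.org/x/crypto/ssh (port and key path are the ones in this log; the program itself is illustrative):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/ha-896691-m04/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local container, not for real hosts
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33193", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname ha-896691-m04 && echo "ha-896691-m04" | sudo tee /etc/hostname`)
        fmt.Printf("%s err=%v\n", out, err)
    }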
	I0815 17:23:13.120685  454315 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19450-377193/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-377193/.minikube}
	I0815 17:23:13.120708  454315 ubuntu.go:177] setting up certificates
	I0815 17:23:13.120721  454315 provision.go:84] configureAuth start
	I0815 17:23:13.120781  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m04
	I0815 17:23:13.137836  454315 provision.go:143] copyHostCerts
	I0815 17:23:13.137886  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem
	I0815 17:23:13.137924  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem, removing ...
	I0815 17:23:13.137937  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem
	I0815 17:23:13.138009  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/cert.pem (1123 bytes)
	I0815 17:23:13.138095  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem
	I0815 17:23:13.138113  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem, removing ...
	I0815 17:23:13.138124  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem
	I0815 17:23:13.138149  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/key.pem (1675 bytes)
	I0815 17:23:13.138210  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem
	I0815 17:23:13.138226  454315 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem, removing ...
	I0815 17:23:13.138230  454315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem
	I0815 17:23:13.138254  454315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-377193/.minikube/ca.pem (1078 bytes)
	I0815 17:23:13.138315  454315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem org=jenkins.ha-896691-m04 san=[127.0.0.1 192.168.49.5 ha-896691-m04 localhost minikube]
	I0815 17:23:13.219398  454315 provision.go:177] copyRemoteCerts
	I0815 17:23:13.219457  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:23:13.219493  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m04
	I0815 17:23:13.236337  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m04/id_rsa Username:docker}
	I0815 17:23:13.337039  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:23:13.337102  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 17:23:13.359693  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:23:13.359766  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:23:13.381931  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:23:13.382005  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 17:23:13.405016  454315 provision.go:87] duration metric: took 284.283017ms to configureAuth
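configureAuth generates a server certificate whose SANs are exactly the list logged above: 127.0.0.1, 192.168.49.5, ha-896691-m04, localhost, minikube. A sketch of producing such a SAN certificate with crypto/x509 (self-signed here for brevity; minikube actually signs with its ca.pem/ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-896691-m04"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SANs from the provision.go line above.
            DNSNames:    []string{"ha-896691-m04", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
        }
        // Self-signed for the sketch; pass the CA cert and key as the
        // third and fifth arguments to sign against a CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }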
	I0815 17:23:13.405045  454315 ubuntu.go:193] setting minikube options for container-runtime
	I0815 17:23:13.405265  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:23:13.405371  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m04
	I0815 17:23:13.422391  454315 main.go:141] libmachine: Using SSH client type: native
	I0815 17:23:13.422571  454315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0815 17:23:13.422588  454315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:23:13.669318  454315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:23:13.669345  454315 machine.go:96] duration metric: took 4.015487703s to provisionDockerMachine
	I0815 17:23:13.669360  454315 start.go:293] postStartSetup for "ha-896691-m04" (driver="docker")
	I0815 17:23:13.669372  454315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:23:13.669441  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:23:13.669488  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m04
	I0815 17:23:13.686937  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m04/id_rsa Username:docker}
	I0815 17:23:13.785445  454315 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:23:13.788779  454315 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 17:23:13.788825  454315 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 17:23:13.788838  454315 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 17:23:13.788846  454315 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 17:23:13.788859  454315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/addons for local assets ...
	I0815 17:23:13.788934  454315 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-377193/.minikube/files for local assets ...
	I0815 17:23:13.789026  454315 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> 3840912.pem in /etc/ssl/certs
	I0815 17:23:13.789040  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> /etc/ssl/certs/3840912.pem
	I0815 17:23:13.789152  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:23:13.797541  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem --> /etc/ssl/certs/3840912.pem (1708 bytes)
	I0815 17:23:13.818904  454315 start.go:296] duration metric: took 149.52879ms for postStartSetup
	I0815 17:23:13.818996  454315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:23:13.819044  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m04
	I0815 17:23:13.836055  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m04/id_rsa Username:docker}
	I0815 17:23:13.929479  454315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 17:23:13.934109  454315 fix.go:56] duration metric: took 4.594156407s for fixHost
	I0815 17:23:13.934138  454315 start.go:83] releasing machines lock for "ha-896691-m04", held for 4.59421256s
	I0815 17:23:13.934211  454315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m04
	I0815 17:23:13.952878  454315 out.go:177] * Found network options:
	I0815 17:23:13.954248  454315 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W0815 17:23:13.955342  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:23:13.955374  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:23:13.955388  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:23:13.955416  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:23:13.955433  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:23:13.955446  454315 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:23:13.955524  454315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:23:13.955536  454315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:23:13.955576  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m04
	I0815 17:23:13.955608  454315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m04
	I0815 17:23:13.975288  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m04/id_rsa Username:docker}
	I0815 17:23:13.976407  454315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m04/id_rsa Username:docker}
	I0815 17:23:14.199730  454315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 17:23:14.204096  454315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:23:14.212454  454315 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0815 17:23:14.212514  454315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:23:14.220327  454315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 17:23:14.220352  454315 start.go:495] detecting cgroup driver to use...
	I0815 17:23:14.220386  454315 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 17:23:14.220482  454315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:23:14.231274  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:23:14.241597  454315 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:23:14.241647  454315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:23:14.253640  454315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:23:14.264951  454315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:23:14.332400  454315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:23:14.412982  454315 docker.go:233] disabling docker service ...
	I0815 17:23:14.413053  454315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:23:14.424659  454315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:23:14.434914  454315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:23:14.511400  454315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:23:14.598203  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:23:14.609100  454315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:23:14.623602  454315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:23:14.623660  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:23:14.632224  454315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:23:14.632287  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:23:14.641095  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:23:14.651034  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:23:14.659998  454315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:23:14.668397  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:23:14.677431  454315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:23:14.686110  454315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:23:14.694874  454315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:23:14.702294  454315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:23:14.710104  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:23:14.791042  454315 ssh_runner.go:195] Run: sudo systemctl restart crio
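Each sed invocation above rewrites one key in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart. The same edit expressed in Go (a sketch of the pause_image and cgroup_manager replacements only, not minikube's crio.go):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Equivalent of the first two sed -i replacements in the log above.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            panic(err)
        }
        // A systemctl daemon-reload and restart of crio must follow,
        // as the log does next.
    }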
	I0815 17:23:14.892012  454315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:23:14.892088  454315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:23:14.895686  454315 start.go:563] Will wait 60s for crictl version
	I0815 17:23:14.895734  454315 ssh_runner.go:195] Run: which crictl
	I0815 17:23:14.898791  454315 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:23:14.930469  454315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0815 17:23:14.930542  454315 ssh_runner.go:195] Run: crio --version
	I0815 17:23:14.964777  454315 ssh_runner.go:195] Run: crio --version
	I0815 17:23:15.000392  454315 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0815 17:23:15.001836  454315 out.go:177]   - env NO_PROXY=192.168.49.2
	I0815 17:23:15.003260  454315 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0815 17:23:15.004389  454315 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I0815 17:23:15.005592  454315 cli_runner.go:164] Run: docker network inspect ha-896691 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:23:15.021679  454315 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 17:23:15.025077  454315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
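The one-liner above rewrites /etc/hosts in a crash-safe way: filter out any existing host.minikube.internal entry, append the gateway mapping, write to a temp file, then copy it over the original. The same filter-and-append in Go (a sketch, not minikube's helper; like the log's sudo cp, it needs root):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.49.1\thost.minikube.internal")
        // Write to a temp file first, then move it into place, mirroring
        // the /tmp/h.$$ + cp approach above.
        tmp := "/etc/hosts.tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
        if err := os.Rename(tmp, "/etc/hosts"); err != nil {
            panic(err)
        }
    }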
	I0815 17:23:15.035334  454315 mustload.go:65] Loading cluster: ha-896691
	I0815 17:23:15.035577  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:23:15.035868  454315 cli_runner.go:164] Run: docker container inspect ha-896691 --format={{.State.Status}}
	I0815 17:23:15.052132  454315 host.go:66] Checking if "ha-896691" exists ...
	I0815 17:23:15.052378  454315 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691 for IP: 192.168.49.5
	I0815 17:23:15.052390  454315 certs.go:194] generating shared ca certs ...
	I0815 17:23:15.052410  454315 certs.go:226] acquiring lock for ca certs: {Name:mkf196aaefcb61003123eeb327e0f1a70bf4bfe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:23:15.052534  454315 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key
	I0815 17:23:15.052629  454315 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key
	I0815 17:23:15.052650  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:23:15.052672  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:23:15.052690  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:23:15.052712  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:23:15.052778  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem (1338 bytes)
	W0815 17:23:15.052827  454315 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091_empty.pem, impossibly tiny 0 bytes
	I0815 17:23:15.052842  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 17:23:15.052878  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/ca.pem (1078 bytes)
	I0815 17:23:15.052911  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:23:15.052942  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/key.pem (1675 bytes)
	I0815 17:23:15.052995  454315 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem (1708 bytes)
	I0815 17:23:15.053040  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem -> /usr/share/ca-certificates/384091.pem
	I0815 17:23:15.053062  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem -> /usr/share/ca-certificates/3840912.pem
	I0815 17:23:15.053081  454315 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:23:15.053108  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:23:15.075471  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:23:15.096958  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:23:15.120148  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 17:23:15.141870  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/certs/384091.pem --> /usr/share/ca-certificates/384091.pem (1338 bytes)
	I0815 17:23:15.163829  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/ssl/certs/3840912.pem --> /usr/share/ca-certificates/3840912.pem (1708 bytes)
	I0815 17:23:15.185901  454315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:23:15.207115  454315 ssh_runner.go:195] Run: openssl version
	I0815 17:23:15.211910  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3840912.pem && ln -fs /usr/share/ca-certificates/3840912.pem /etc/ssl/certs/3840912.pem"
	I0815 17:23:15.220796  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3840912.pem
	I0815 17:23:15.223811  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:14 /usr/share/ca-certificates/3840912.pem
	I0815 17:23:15.223858  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3840912.pem
	I0815 17:23:15.230424  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3840912.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:23:15.238369  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:23:15.246558  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:23:15.250191  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:23:15.250249  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:23:15.256310  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:23:15.264192  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384091.pem && ln -fs /usr/share/ca-certificates/384091.pem /etc/ssl/certs/384091.pem"
	I0815 17:23:15.272884  454315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384091.pem
	I0815 17:23:15.275805  454315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:14 /usr/share/ca-certificates/384091.pem
	I0815 17:23:15.275863  454315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384091.pem
	I0815 17:23:15.282349  454315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384091.pem /etc/ssl/certs/51391683.0"
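The openssl/ln pairs above build OpenSSL's hashed CA directory: `openssl x509 -hash -noout` prints the certificate's subject-name hash (3ec20f2e, b5213941, 51391683 in this run), and a <hash>.0 symlink in /etc/ssl/certs lets TLS clients look the CA up by that hash. A hedged Go equivalent of one install step (the helper is illustrative; the paths are the ones from the log):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert reproduces the logged sequence for one certificate: compute the
// OpenSSL subject hash, then force-create <certsDir>/<hash>.0 -> certPath.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	_ = linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
}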
	I0815 17:23:15.290142  454315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:23:15.293097  454315 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:23:15.293139  454315 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.0  false true} ...
	I0815 17:23:15.293244  454315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-896691-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-896691 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
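Two details in the rendered unit above are easy to miss: the empty ExecStart= line clears the base unit's command (systemd requires that before a drop-in may assign a new ExecStart), and --node-ip pins the kubelet to the m04 address 192.168.49.5. A sketch of rendering such a drop-in with text/template (the template and field names are illustrative, not minikube's bundled one):

package main

import (
	"os"
	"text/template"
)

// The blank ExecStart= is deliberate: a systemd drop-in must reset the base
// unit's ExecStart before it can set a new command line.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime": "crio",
		"Kubelet": "/var/lib/minikube/binaries/v1.31.0/kubelet",
		"Node":    "ha-896691-m04",
		"IP":      "192.168.49.5",
	})
}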
	I0815 17:23:15.293301  454315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:23:15.300898  454315 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:23:15.300954  454315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0815 17:23:15.308301  454315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0815 17:23:15.324309  454315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:23:15.340641  454315 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:23:15.343735  454315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:23:15.353456  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:23:15.433259  454315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:23:15.443613  454315 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0815 17:23:15.443867  454315 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:23:15.445908  454315 out.go:177] * Verifying Kubernetes components...
	I0815 17:23:15.446956  454315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:23:15.520595  454315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:23:15.531883  454315 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:23:15.532114  454315 kapi.go:59] client config for ha-896691: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/profiles/ha-896691/client.key", CAFile:"/home/jenkins/minikube-integration/19450-377193/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 17:23:15.532179  454315 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0815 17:23:15.532387  454315 node_ready.go:35] waiting up to 6m0s for node "ha-896691-m04" to be "Ready" ...
	I0815 17:23:15.532462  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:15.532469  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:15.532476  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:15.532481  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:15.535132  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:16.032731  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:16.032751  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:16.032759  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:16.032764  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:16.035216  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:16.532723  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:16.532743  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:16.532753  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:16.532761  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:16.535330  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:17.033245  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:17.033265  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:17.033280  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:17.033285  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:17.035819  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:17.532708  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:17.532728  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:17.532737  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:17.532743  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:17.535231  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:17.535734  454315 node_ready.go:53] node "ha-896691-m04" has status "Ready":"Unknown"
	I0815 17:23:18.032718  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:18.032741  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:18.032749  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:18.032753  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:18.035205  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:18.532715  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:18.532737  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:18.532746  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:18.532753  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:18.535322  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:19.032908  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:19.032928  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:19.032937  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:19.032941  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:19.035438  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:19.533380  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:19.533399  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:19.533408  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:19.533414  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:19.535864  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:19.536338  454315 node_ready.go:53] node "ha-896691-m04" has status "Ready":"Unknown"
	I0815 17:23:20.032724  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:20.032745  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:20.032753  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:20.032757  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:20.035619  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:20.533354  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:20.533377  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:20.533385  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:20.533390  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:20.535959  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:21.032726  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:21.032748  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:21.032757  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:21.032763  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:21.035338  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:21.533219  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:21.533237  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:21.533245  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:21.533249  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:21.535670  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:22.033542  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:22.033564  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:22.033579  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:22.033588  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:22.036043  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:22.036572  454315 node_ready.go:53] node "ha-896691-m04" has status "Ready":"Unknown"
	I0815 17:23:22.532741  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:22.532767  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:22.532777  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:22.532782  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:22.535265  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:23.032747  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:23.032774  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.032785  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.032791  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.036465  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:23:23.533068  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:23.533094  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.533105  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.533111  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.535644  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:23.536188  454315 node_ready.go:49] node "ha-896691-m04" has status "Ready":"True"
	I0815 17:23:23.536211  454315 node_ready.go:38] duration metric: took 8.003809277s for node "ha-896691-m04" to be "Ready" ...
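The block of GETs above is a plain poll loop: hit /api/v1/nodes/ha-896691-m04 roughly every 500ms until the Ready condition flips from Unknown to True (8.0s here, with a 6m0s ceiling). The equivalent check written against client-go, as a sketch (minikube's own loop lives in node_ready.go; this helper is illustrative):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls at the ~500ms cadence seen in the log until the node's
// Ready condition is True or the timeout expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

Given a clientset built from the kubeconfig, waitNodeReady(ctx, cs, "ha-896691-m04", 6*time.Minute) reproduces the wait above.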
	I0815 17:23:23.536221  454315 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:23:23.536283  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0815 17:23:23.536292  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.536299  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.536303  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.540879  454315 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:23:23.547505  454315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lmnsh" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:23.547589  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-lmnsh
	I0815 17:23:23.547597  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.547605  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.547610  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.549730  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:23.550336  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:23.550351  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.550359  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.550364  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.552247  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:23.552760  454315 pod_ready.go:98] node "ha-896691" hosting pod "coredns-6f6b679f8f-lmnsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:23.552779  454315 pod_ready.go:82] duration metric: took 5.252295ms for pod "coredns-6f6b679f8f-lmnsh" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:23.552788  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "coredns-6f6b679f8f-lmnsh" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
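The skip just recorded repeats for every pod scheduled on the NotReady control-plane node: fetch the pod, then fetch its hosting node, and give up on the pod early when the node itself is not Ready, since pod conditions cannot settle there. A sketch of that two-step check (illustrative helper; minikube's version is in pod_ready.go):

package podwait

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReadyOrSkip returns (ready, skip, err); skip is true when the hosting
// node is not Ready, mirroring the "(skipping!)" branches in the log.
func podReadyOrSkip(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			return false, true, fmt.Errorf("node %q hosting pod %q is not Ready", node.Name, name)
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, false, nil
		}
	}
	return false, false, nil
}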
	I0815 17:23:23.552795  454315 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-w6rw2" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:23.552853  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-w6rw2
	I0815 17:23:23.552864  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.552873  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.552885  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.554725  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:23.555335  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:23.555350  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.555358  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.555365  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.557151  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:23.557673  454315 pod_ready.go:98] node "ha-896691" hosting pod "coredns-6f6b679f8f-w6rw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:23.557692  454315 pod_ready.go:82] duration metric: took 4.889749ms for pod "coredns-6f6b679f8f-w6rw2" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:23.557700  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "coredns-6f6b679f8f-w6rw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:23.557706  454315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:23.557754  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691
	I0815 17:23:23.557762  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.557768  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.557777  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.559576  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:23.560024  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:23.560039  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.560049  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.560053  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.561718  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:23.562130  454315 pod_ready.go:98] node "ha-896691" hosting pod "etcd-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:23.562148  454315 pod_ready.go:82] duration metric: took 4.433661ms for pod "etcd-ha-896691" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:23.562156  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "etcd-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:23.562164  454315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:23.562216  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m02
	I0815 17:23:23.562224  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.562230  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.562233  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.563842  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:23.564336  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:23.564352  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.564361  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.564366  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.565962  454315 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:23:23.566395  454315 pod_ready.go:93] pod "etcd-ha-896691-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:23.566413  454315 pod_ready.go:82] duration metric: took 4.23836ms for pod "etcd-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:23.566425  454315 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:23.733730  454315 request.go:632] Waited for 167.222299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:23.733794  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896691-m03
	I0815 17:23:23.733802  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.733816  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.733826  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.736229  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:23.933146  454315 request.go:632] Waited for 196.276163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:23.933212  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:23.933228  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:23.933239  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:23.933246  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:23.935757  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:23.936252  454315 pod_ready.go:93] pod "etcd-ha-896691-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:23.936270  454315 pod_ready.go:82] duration metric: took 369.834632ms for pod "etcd-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
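The recurring "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's local token-bucket limiter, not from the API server: the default rest.Config allows QPS 5 with Burst 10, so each pod-plus-node pair of GETs queues for roughly 200ms once the burst is spent. A sketch of how a client could loosen those limits (values are illustrative; the test client here keeps the defaults):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig source; the log's client loads the jenkins one.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cfg.QPS = 50    // default 5: ~200ms between requests once Burst is spent
	cfg.Burst = 100 // default 10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	_ = cs // requests through this clientset now throttle far less
}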
	I0815 17:23:23.936291  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:24.133408  454315 request.go:632] Waited for 197.036194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:23:24.133489  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691
	I0815 17:23:24.133498  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:24.133507  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:24.133522  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:24.136238  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:24.333227  454315 request.go:632] Waited for 196.278303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:24.333303  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:24.333312  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:24.333320  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:24.333323  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:24.335933  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:24.336474  454315 pod_ready.go:98] node "ha-896691" hosting pod "kube-apiserver-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:24.336496  454315 pod_ready.go:82] duration metric: took 400.196111ms for pod "kube-apiserver-ha-896691" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:24.336508  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "kube-apiserver-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:24.336514  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:24.533448  454315 request.go:632] Waited for 196.834461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m02
	I0815 17:23:24.533530  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m02
	I0815 17:23:24.533539  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:24.533547  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:24.533553  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:24.536167  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:24.733209  454315 request.go:632] Waited for 196.274655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:24.733264  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:24.733271  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:24.733281  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:24.733292  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:24.735699  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:24.736361  454315 pod_ready.go:93] pod "kube-apiserver-ha-896691-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:24.736401  454315 pod_ready.go:82] duration metric: took 399.877508ms for pod "kube-apiserver-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:24.736419  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:24.933473  454315 request.go:632] Waited for 196.949994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m03
	I0815 17:23:24.933568  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896691-m03
	I0815 17:23:24.933579  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:24.933590  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:24.933597  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:24.936197  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:25.133204  454315 request.go:632] Waited for 196.267019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:25.133267  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:25.133275  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:25.133286  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:25.133298  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:25.135795  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:25.136363  454315 pod_ready.go:93] pod "kube-apiserver-ha-896691-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:25.136385  454315 pod_ready.go:82] duration metric: took 399.956796ms for pod "kube-apiserver-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:25.136395  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:25.333389  454315 request.go:632] Waited for 196.913791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691
	I0815 17:23:25.333447  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691
	I0815 17:23:25.333453  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:25.333461  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:25.333467  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:25.336268  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:25.533211  454315 request.go:632] Waited for 196.276083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:25.533283  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:25.533295  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:25.533307  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:25.533312  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:25.535728  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:25.536266  454315 pod_ready.go:98] node "ha-896691" hosting pod "kube-controller-manager-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:25.536290  454315 pod_ready.go:82] duration metric: took 399.884325ms for pod "kube-controller-manager-ha-896691" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:25.536298  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "kube-controller-manager-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:25.536305  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:25.733282  454315 request.go:632] Waited for 196.894035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m02
	I0815 17:23:25.733344  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m02
	I0815 17:23:25.733352  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:25.733361  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:25.733369  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:25.735749  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:25.933715  454315 request.go:632] Waited for 197.231918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:25.933782  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:25.933791  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:25.933829  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:25.933840  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:25.936302  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:25.936811  454315 pod_ready.go:93] pod "kube-controller-manager-ha-896691-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:25.936831  454315 pod_ready.go:82] duration metric: took 400.516852ms for pod "kube-controller-manager-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:25.936842  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:26.133756  454315 request.go:632] Waited for 196.830418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m03
	I0815 17:23:26.133837  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896691-m03
	I0815 17:23:26.133849  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:26.133861  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:26.133871  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:26.136692  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:26.333681  454315 request.go:632] Waited for 196.350672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:26.333759  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:26.333769  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:26.333779  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:26.333786  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:26.336121  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:26.336681  454315 pod_ready.go:93] pod "kube-controller-manager-ha-896691-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:26.336705  454315 pod_ready.go:82] duration metric: took 399.85578ms for pod "kube-controller-manager-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:26.336718  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-74b2m" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:26.533680  454315 request.go:632] Waited for 196.884327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-74b2m
	I0815 17:23:26.533765  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-74b2m
	I0815 17:23:26.533774  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:26.533782  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:26.533787  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:26.536674  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:26.733630  454315 request.go:632] Waited for 196.35665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:26.733709  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:26.733716  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:26.733725  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:26.733731  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:26.736207  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:26.736766  454315 pod_ready.go:93] pod "kube-proxy-74b2m" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:26.736786  454315 pod_ready.go:82] duration metric: took 400.060639ms for pod "kube-proxy-74b2m" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:26.736795  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9m9tc" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:26.933854  454315 request.go:632] Waited for 196.962983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9m9tc
	I0815 17:23:26.933943  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9m9tc
	I0815 17:23:26.933954  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:26.933966  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:26.933978  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:26.936485  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:27.133376  454315 request.go:632] Waited for 196.284907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:27.133431  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:27.133436  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:27.133453  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:27.133473  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:27.136213  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:27.136793  454315 pod_ready.go:98] node "ha-896691" hosting pod "kube-proxy-9m9tc" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:27.136816  454315 pod_ready.go:82] duration metric: took 400.01178ms for pod "kube-proxy-9m9tc" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:27.136825  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "kube-proxy-9m9tc" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:27.136833  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g4qhb" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:27.333712  454315 request.go:632] Waited for 196.801311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb
	I0815 17:23:27.333796  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb
	I0815 17:23:27.333801  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:27.333809  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:27.333815  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:27.336244  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:27.533129  454315 request.go:632] Waited for 196.261726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:27.533205  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:27.533213  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:27.533220  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:27.533223  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:27.535682  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:27.733343  454315 request.go:632] Waited for 96.265693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb
	I0815 17:23:27.733410  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb
	I0815 17:23:27.733415  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:27.733423  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:27.733431  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:27.736350  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:27.933492  454315 request.go:632] Waited for 196.37637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:27.933569  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:27.933577  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:27.933596  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:27.933604  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:27.935912  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:28.137434  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb
	I0815 17:23:28.137455  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:28.137464  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:28.137468  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:28.140244  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:28.333246  454315 request.go:632] Waited for 192.267701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:28.333349  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:28.333365  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:28.333376  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:28.333384  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:28.335784  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:28.637563  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g4qhb
	I0815 17:23:28.637588  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:28.637606  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:28.637612  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:28.640067  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:28.734119  454315 request.go:632] Waited for 93.239131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:28.734202  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m04
	I0815 17:23:28.734208  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:28.734216  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:28.734220  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:28.736707  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:28.737203  454315 pod_ready.go:93] pod "kube-proxy-g4qhb" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:28.737222  454315 pod_ready.go:82] duration metric: took 1.600380322s for pod "kube-proxy-g4qhb" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:28.737238  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z4mvj" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:28.933675  454315 request.go:632] Waited for 196.348194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z4mvj
	I0815 17:23:28.933735  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z4mvj
	I0815 17:23:28.933742  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:28.933751  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:28.933759  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:28.936227  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:29.133155  454315 request.go:632] Waited for 196.274907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:29.133218  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:29.133223  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:29.133230  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:29.133235  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:29.135703  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:29.136152  454315 pod_ready.go:93] pod "kube-proxy-z4mvj" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:29.136170  454315 pod_ready.go:82] duration metric: took 398.922892ms for pod "kube-proxy-z4mvj" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:29.136179  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896691" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:29.333293  454315 request.go:632] Waited for 197.028199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691
	I0815 17:23:29.333351  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691
	I0815 17:23:29.333356  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:29.333364  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:29.333371  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:29.336048  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:29.533863  454315 request.go:632] Waited for 197.211055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:29.533928  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691
	I0815 17:23:29.533933  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:29.533941  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:29.533948  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:29.536586  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:29.537208  454315 pod_ready.go:98] node "ha-896691" hosting pod "kube-scheduler-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:29.537235  454315 pod_ready.go:82] duration metric: took 401.047317ms for pod "kube-scheduler-ha-896691" in "kube-system" namespace to be "Ready" ...
	E0815 17:23:29.537249  454315 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896691" hosting pod "kube-scheduler-ha-896691" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896691" has status "Ready":"Unknown"
	I0815 17:23:29.537258  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:29.734082  454315 request.go:632] Waited for 196.743141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m02
	I0815 17:23:29.734163  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m02
	I0815 17:23:29.734173  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:29.734181  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:29.734189  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:29.736693  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:29.933291  454315 request.go:632] Waited for 195.945307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:29.933373  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m02
	I0815 17:23:29.933382  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:29.933390  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:29.933396  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:29.935961  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:29.936531  454315 pod_ready.go:93] pod "kube-scheduler-ha-896691-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:29.936567  454315 pod_ready.go:82] duration metric: took 399.296909ms for pod "kube-scheduler-ha-896691-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:29.936581  454315 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:30.133513  454315 request.go:632] Waited for 196.845764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m03
	I0815 17:23:30.133572  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896691-m03
	I0815 17:23:30.133578  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:30.133592  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:30.133601  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:30.136199  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:30.334081  454315 request.go:632] Waited for 197.337296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:30.334154  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896691-m03
	I0815 17:23:30.334161  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:30.334169  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:30.334176  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:30.336957  454315 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:23:30.337432  454315 pod_ready.go:93] pod "kube-scheduler-ha-896691-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:23:30.337451  454315 pod_ready.go:82] duration metric: took 400.862918ms for pod "kube-scheduler-ha-896691-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:23:30.337462  454315 pod_ready.go:39] duration metric: took 6.801232165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:23:30.337478  454315 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:23:30.337546  454315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:23:30.348231  454315 system_svc.go:56] duration metric: took 10.744843ms WaitForService to wait for kubelet
	I0815 17:23:30.348261  454315 kubeadm.go:582] duration metric: took 14.904603421s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:23:30.348289  454315 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:23:30.533712  454315 request.go:632] Waited for 185.334946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0815 17:23:30.533766  454315 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0815 17:23:30.533772  454315 round_trippers.go:469] Request Headers:
	I0815 17:23:30.533780  454315 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:23:30.533786  454315 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:23:30.536983  454315 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:23:30.538134  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:23:30.538154  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:23:30.538165  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:23:30.538169  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:23:30.538173  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:23:30.538177  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:23:30.538180  454315 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0815 17:23:30.538183  454315 node_conditions.go:123] node cpu capacity is 8
	I0815 17:23:30.538186  454315 node_conditions.go:105] duration metric: took 189.888516ms to run NodePressure ...
	I0815 17:23:30.538197  454315 start.go:241] waiting for startup goroutines ...
	I0815 17:23:30.538220  454315 start.go:255] writing updated cluster config ...
	I0815 17:23:30.538499  454315 ssh_runner.go:195] Run: rm -f paused
	I0815 17:23:30.587910  454315 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 17:23:30.590203  454315 out.go:177] * Done! kubectl is now configured to use "ha-896691" cluster and "default" namespace by default
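
The readiness wait above ends with minikube probing the kubelet unit directly (sudo systemctl is-active --quiet service kubelet). To repeat that check by hand, a minimal sketch using the profile name from this run:

    # exit status 0 means the kubelet unit is active on the node
    minikube -p ha-896691 ssh -- sudo systemctl is-active kubelet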
	
	
	==> CRI-O <==
	Aug 15 17:22:21 ha-896691 crio[681]: time="2024-08-15 17:22:21.387465862Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 15 17:22:21 ha-896691 crio[681]: time="2024-08-15 17:22:21.451595214Z" level=info msg="Created container 7a6ee8f1f6ba74f9237852ddefb98cc346f7f75942da852b4e9487a38499e02c: kube-system/kube-apiserver-ha-896691/kube-apiserver" id=9e51c758-069c-452c-a696-04db96783cb3 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 17:22:21 ha-896691 crio[681]: time="2024-08-15 17:22:21.452240615Z" level=info msg="Starting container: 7a6ee8f1f6ba74f9237852ddefb98cc346f7f75942da852b4e9487a38499e02c" id=ea988748-9b14-4f80-875e-eca4d1b2f19b name=/runtime.v1.RuntimeService/StartContainer
	Aug 15 17:22:21 ha-896691 crio[681]: time="2024-08-15 17:22:21.458016523Z" level=info msg="Started container" PID=2037 containerID=7a6ee8f1f6ba74f9237852ddefb98cc346f7f75942da852b4e9487a38499e02c description=kube-system/kube-apiserver-ha-896691/kube-apiserver id=ea988748-9b14-4f80-875e-eca4d1b2f19b name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3a089f1cd41ef6308bd77426f05626454990424f2f0b560ae66147228dba3d4
	Aug 15 17:22:23 ha-896691 conmon[1020]: conmon 5c5c6eff58636c825ee5 <ninfo>: container 1054 exited with status 1
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.394724730Z" level=info msg="Checking image status: ghcr.io/kube-vip/kube-vip:v0.8.0" id=9dd878be-6e48-4f46-9d92-02600693acc7 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.394945673Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kube-vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9dd878be-6e48-4f46-9d92-02600693acc7 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.395550070Z" level=info msg="Checking image status: ghcr.io/kube-vip/kube-vip:v0.8.0" id=164d01ec-a216-4cdf-ab7b-9e8fa12a57fb name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.395695112Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kube-vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=164d01ec-a216-4cdf-ab7b-9e8fa12a57fb name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.396277019Z" level=info msg="Creating container: kube-system/kube-vip-ha-896691/kube-vip" id=54262e8c-2633-46b2-a9b5-db77ad81eb19 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.396359464Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.405836921Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/711afe9ae39ab90ccd788c0ea3775206079ef2d56474beeef5d6d8c2d6695fea/merged/etc/passwd: no such file or directory"
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.405867804Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/711afe9ae39ab90ccd788c0ea3775206079ef2d56474beeef5d6d8c2d6695fea/merged/etc/group: no such file or directory"
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.436128894Z" level=info msg="Created container 556dd6aa0efe5ad308062969b1117f387f8e3d3d2230d0b9de7e41fd30f61628: kube-system/kube-vip-ha-896691/kube-vip" id=54262e8c-2633-46b2-a9b5-db77ad81eb19 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.436736493Z" level=info msg="Starting container: 556dd6aa0efe5ad308062969b1117f387f8e3d3d2230d0b9de7e41fd30f61628" id=25f49760-0379-4c7e-b77d-31317c175e1d name=/runtime.v1.RuntimeService/StartContainer
	Aug 15 17:22:24 ha-896691 crio[681]: time="2024-08-15 17:22:24.441656160Z" level=info msg="Started container" PID=2099 containerID=556dd6aa0efe5ad308062969b1117f387f8e3d3d2230d0b9de7e41fd30f61628 description=kube-system/kube-vip-ha-896691/kube-vip id=25f49760-0379-4c7e-b77d-31317c175e1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c18f91d0bd57db6599f809135460daaf58a527b1e66006fb97c88c41ad5b873f
	Aug 15 17:22:27 ha-896691 crio[681]: time="2024-08-15 17:22:27.178173252Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=3ff49341-9f8d-459a-80f3-a07622d57cc3 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:22:27 ha-896691 crio[681]: time="2024-08-15 17:22:27.178414564Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:89437512,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=3ff49341-9f8d-459a-80f3-a07622d57cc3 name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:22:27 ha-896691 crio[681]: time="2024-08-15 17:22:27.179051917Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=4d830787-1234-4738-a4d8-8421ba363eba name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:22:27 ha-896691 crio[681]: time="2024-08-15 17:22:27.179269598Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:89437512,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=4d830787-1234-4738-a4d8-8421ba363eba name=/runtime.v1.ImageService/ImageStatus
	Aug 15 17:22:27 ha-896691 crio[681]: time="2024-08-15 17:22:27.179832422Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-896691/kube-controller-manager" id=80647c46-a5d3-4708-b556-3354d3c809a8 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 17:22:27 ha-896691 crio[681]: time="2024-08-15 17:22:27.179931308Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 15 17:22:27 ha-896691 crio[681]: time="2024-08-15 17:22:27.247297187Z" level=info msg="Created container a9ca3dd6842d1fa8f276b6b2b98628233a8cc2caee5ccb269b7d1ee46b3f1aea: kube-system/kube-controller-manager-ha-896691/kube-controller-manager" id=80647c46-a5d3-4708-b556-3354d3c809a8 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 15 17:22:27 ha-896691 crio[681]: time="2024-08-15 17:22:27.247817204Z" level=info msg="Starting container: a9ca3dd6842d1fa8f276b6b2b98628233a8cc2caee5ccb269b7d1ee46b3f1aea" id=4c182dda-adcc-4c6e-b012-07782530c905 name=/runtime.v1.RuntimeService/StartContainer
	Aug 15 17:22:27 ha-896691 crio[681]: time="2024-08-15 17:22:27.253664839Z" level=info msg="Started container" PID=2145 containerID=a9ca3dd6842d1fa8f276b6b2b98628233a8cc2caee5ccb269b7d1ee46b3f1aea description=kube-system/kube-controller-manager-ha-896691/kube-controller-manager id=4c182dda-adcc-4c6e-b012-07782530c905 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3fe816a80c972b47c083855781299ca9c477c30df4bdace213896ba771c20788
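
This CRI-O excerpt covers the restart cycle visible in the container table below: conmon reports kube-vip container 5c5c6eff58636c825ee5 exiting with status 1, and replacement kube-apiserver, kube-vip, and kube-controller-manager containers are created and started within seconds. To inspect the same state directly, a sketch (assuming crictl is available in the node image, as it normally is for the CRI-O runtime):

    # list all containers, including exited attempts
    minikube -p ha-896691 ssh -- sudo crictl ps -a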
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a9ca3dd6842d1       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Running             kube-controller-manager   4                   3fe816a80c972       kube-controller-manager-ha-896691
	556dd6aa0efe5       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   About a minute ago   Running             kube-vip                  1                   c18f91d0bd57d       kube-vip-ha-896691
	7a6ee8f1f6ba7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Running             kube-apiserver            2                   e3a089f1cd41e       kube-apiserver-ha-896691
	3c6a3a894b83f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Running             storage-provisioner       2                   f2df40a64795e       storage-provisioner
	abf129a8200bb       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Exited              kube-controller-manager   3                   3fe816a80c972       kube-controller-manager-ha-896691
	0dc6e7671b144       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 minutes ago        Running             coredns                   1                   e0b08fc5c4d17       coredns-6f6b679f8f-w6rw2
	8bd9e61bf0ec6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 minutes ago        Running             coredns                   1                   a84a5c1fb2602       coredns-6f6b679f8f-lmnsh
	2d28858dd11c3       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   2 minutes ago        Running             kube-proxy                1                   863a61dc869fc       kube-proxy-9m9tc
	97544b4e48bb2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   2 minutes ago        Running             busybox                   1                   b119c883c2ef2       busybox-7dff88458-9gjdc
	36e1d177ffef3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 minutes ago        Exited              storage-provisioner       1                   f2df40a64795e       storage-provisioner
	2ee744e720f67       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   2 minutes ago        Running             kindnet-cni               1                   8862430aade8c       kindnet-9jffh
	bf8a64fe45d07       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   2 minutes ago        Running             kube-scheduler            1                   a76d8ef588f45       kube-scheduler-ha-896691
	69077708e5d33       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   2 minutes ago        Exited              kube-apiserver            1                   e3a089f1cd41e       kube-apiserver-ha-896691
	5c5c6eff58636       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   2 minutes ago        Exited              kube-vip                  0                   c18f91d0bd57d       kube-vip-ha-896691
	e1b8338c11ea2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   2 minutes ago        Running             etcd                      1                   21930eb400acb       etcd-ha-896691
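
Two control-plane containers show Exited attempts (kube-controller-manager attempt 3, kube-apiserver attempt 1); their output is what the log sections further down reproduce. To pull them straight from CRI-O instead, a sketch using the truncated IDs from the table, assuming crictl resolves ID prefixes as it usually does:

    minikube -p ha-896691 ssh -- sudo crictl logs 69077708e5d33
    minikube -p ha-896691 ssh -- sudo crictl logs abf129a8200bb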
	
	
	==> coredns [0dc6e7671b1443908c7b415336f9511decfeea7b00b7137b340fa476a8623c94] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60961 - 47194 "HINFO IN 6095855352563032875.5708676220711232301. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017743343s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1285753560]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 17:21:39.896) (total time: 30001ms):
	Trace[1285753560]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:22:09.897)
	Trace[1285753560]: [30.001079274s] [30.001079274s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1953167218]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 17:21:39.896) (total time: 30001ms):
	Trace[1953167218]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:22:09.897)
	Trace[1953167218]: [30.001160383s] [30.001160383s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[750568197]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 17:21:39.896) (total time: 30001ms):
	Trace[750568197]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:22:09.897)
	Trace[750568197]: [30.001246116s] [30.001246116s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [8bd9e61bf0ec6427516e8bae2d4922f672d811281a3a3f0a71b193a1afc66317] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41200 - 21686 "HINFO IN 4990875985559744206.8527601261381429344. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03279353s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[379415939]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 17:21:39.887) (total time: 30000ms):
	Trace[379415939]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:22:09.888)
	Trace[379415939]: [30.000702473s] [30.000702473s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1719886312]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 17:21:39.888) (total time: 30000ms):
	Trace[1719886312]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:22:09.888)
	Trace[1719886312]: [30.000381588s] [30.000381588s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[719766757]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 17:21:39.888) (total time: 30000ms):
	Trace[719766757]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:22:09.888)
	Trace[719766757]: [30.000337871s] [30.000337871s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
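
Both coredns replicas report the same pattern: 30-second i/o timeouts listing Namespaces, Services, and EndpointSlices through 10.96.0.1:443, the in-cluster apiserver Service VIP, during the 17:21:39-17:22:09 window in which the apiserver was restarting (see the kube-apiserver sections below); the errors stop once it comes back. A quick way to confirm the VIP is backed by endpoints again, using the kubectl context this run configured:

    kubectl --context ha-896691 -n default get endpoints kubernetes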
	
	
	==> describe nodes <==
	Name:               ha-896691
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-896691
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-896691
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_17_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:17:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-896691
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:23:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 17:21:39 +0000   Thu, 15 Aug 2024 17:23:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 17:21:39 +0000   Thu, 15 Aug 2024 17:23:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 17:21:39 +0000   Thu, 15 Aug 2024 17:23:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 17:21:39 +0000   Thu, 15 Aug 2024 17:23:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-896691
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7fedae12b5a42258b59fce04fd32492
	  System UUID:                07453bc8-f4f3-4878-bf15-c6091d66ae10
	  Boot ID:                    2d86d768-5fa6-4bed-a8b9-fa4131d6b0e8
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9gjdc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 coredns-6f6b679f8f-lmnsh             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m3s
	  kube-system                 coredns-6f6b679f8f-w6rw2             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m3s
	  kube-system                 etcd-ha-896691                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m7s
	  kube-system                 kindnet-9jffh                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-ha-896691             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-896691    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-proxy-9m9tc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-ha-896691             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-896691                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m3s                   kube-proxy       
	  Normal   Starting                 6m2s                   kube-proxy       
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node ha-896691 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node ha-896691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node ha-896691 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           6m4s                   node-controller  Node ha-896691 event: Registered Node ha-896691 in Controller
	  Normal   NodeReady                5m49s                  kubelet          Node ha-896691 status is now: NodeReady
	  Normal   RegisteredNode           5m39s                  node-controller  Node ha-896691 event: Registered Node ha-896691 in Controller
	  Normal   RegisteredNode           5m6s                   node-controller  Node ha-896691 event: Registered Node ha-896691 in Controller
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-896691 event: Registered Node ha-896691 in Controller
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node ha-896691 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node ha-896691 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x7 over 2m32s)  kubelet          Node ha-896691 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m                     node-controller  Node ha-896691 event: Registered Node ha-896691 in Controller
	  Normal   RegisteredNode           74s                    node-controller  Node ha-896691 event: Registered Node ha-896691 in Controller
	  Normal   RegisteredNode           48s                    node-controller  Node ha-896691 event: Registered Node ha-896691 in Controller
	  Normal   NodeNotReady             40s                    node-controller  Node ha-896691 status is now: NodeNotReady
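
The describe output captures the primary control plane mid-failure: the kubelet stopped posting status at 17:23:03, so every condition flipped to Unknown, and the node-controller applied the unreachable NoSchedule/NoExecute taints and the NodeNotReady event (age 40s). A sketch for watching the node recover, assuming the same context:

    kubectl --context ha-896691 get nodes -o wide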
	
	
	Name:               ha-896691-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-896691-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-896691
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_17_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-896691-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:23:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:21:41 +0000   Thu, 15 Aug 2024 17:17:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:21:41 +0000   Thu, 15 Aug 2024 17:17:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:21:41 +0000   Thu, 15 Aug 2024 17:17:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:21:41 +0000   Thu, 15 Aug 2024 17:18:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-896691-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4017281ab7f4fa9accc334183edf1d9
	  System UUID:                2826fe2e-790c-4890-855b-4269674c2976
	  Boot ID:                    2d86d768-5fa6-4bed-a8b9-fa4131d6b0e8
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j8hkb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 etcd-ha-896691-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m46s
	  kube-system                 kindnet-qklml                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m48s
	  kube-system                 kube-apiserver-ha-896691-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-controller-manager-ha-896691-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-proxy-74b2m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-scheduler-ha-896691-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-vip-ha-896691-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 113s                   kube-proxy       
	  Normal   Starting                 3m3s                   kube-proxy       
	  Normal   Starting                 5m44s                  kube-proxy       
	  Normal   NodeHasSufficientPID     5m48s (x7 over 5m48s)  kubelet          Node ha-896691-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m48s (x8 over 5m48s)  kubelet          Node ha-896691-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m48s (x8 over 5m48s)  kubelet          Node ha-896691-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           5m45s                  node-controller  Node ha-896691-m02 event: Registered Node ha-896691-m02 in Controller
	  Normal   RegisteredNode           5m40s                  node-controller  Node ha-896691-m02 event: Registered Node ha-896691-m02 in Controller
	  Normal   RegisteredNode           5m7s                   node-controller  Node ha-896691-m02 event: Registered Node ha-896691-m02 in Controller
	  Normal   NodeHasSufficientPID     3m41s (x7 over 3m41s)  kubelet          Node ha-896691-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m41s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m41s (x8 over 3m41s)  kubelet          Node ha-896691-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m41s (x8 over 3m41s)  kubelet          Node ha-896691-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-896691-m02 event: Registered Node ha-896691-m02 in Controller
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node ha-896691-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node ha-896691-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x7 over 2m32s)  kubelet          Node ha-896691-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m1s                   node-controller  Node ha-896691-m02 event: Registered Node ha-896691-m02 in Controller
	  Normal   RegisteredNode           75s                    node-controller  Node ha-896691-m02 event: Registered Node ha-896691-m02 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-896691-m02 event: Registered Node ha-896691-m02 in Controller
	
	
	Name:               ha-896691-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-896691-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-896691
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_19_11_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:19:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-896691-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:23:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:23:43 +0000   Thu, 15 Aug 2024 17:23:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:23:43 +0000   Thu, 15 Aug 2024 17:23:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:23:43 +0000   Thu, 15 Aug 2024 17:23:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:23:43 +0000   Thu, 15 Aug 2024 17:23:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-896691-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7311f5b2ee1a48f1bab223cca05e5f7c
	  System UUID:                ed9d5392-8d8f-43c9-8503-49af9403b0cf
	  Boot ID:                    2d86d768-5fa6-4bed-a8b9-fa4131d6b0e8
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dzkvl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kindnet-8k6qn              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m33s
	  kube-system                 kube-proxy-g4qhb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16s                    kube-proxy       
	  Normal   Starting                 4m30s                  kube-proxy       
	  Normal   NodeHasSufficientPID     4m33s (x2 over 4m33s)  kubelet          Node ha-896691-m04 status is now: NodeHasSufficientPID
	  Normal   CIDRAssignmentFailed     4m33s                  cidrAllocator    Node ha-896691-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  4m33s (x2 over 4m33s)  kubelet          Node ha-896691-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m33s (x2 over 4m33s)  kubelet          Node ha-896691-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           4m32s                  node-controller  Node ha-896691-m04 event: Registered Node ha-896691-m04 in Controller
	  Normal   RegisteredNode           4m30s                  node-controller  Node ha-896691-m04 event: Registered Node ha-896691-m04 in Controller
	  Normal   RegisteredNode           4m30s                  node-controller  Node ha-896691-m04 event: Registered Node ha-896691-m04 in Controller
	  Normal   NodeReady                4m18s                  kubelet          Node ha-896691-m04 status is now: NodeReady
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-896691-m04 event: Registered Node ha-896691-m04 in Controller
	  Normal   RegisteredNode           2m1s                   node-controller  Node ha-896691-m04 event: Registered Node ha-896691-m04 in Controller
	  Normal   NodeNotReady             81s                    node-controller  Node ha-896691-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           75s                    node-controller  Node ha-896691-m04 event: Registered Node ha-896691-m04 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-896691-m04 event: Registered Node ha-896691-m04 in Controller
	  Normal   Starting                 34s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     28s (x7 over 34s)      kubelet          Node ha-896691-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  21s (x8 over 34s)      kubelet          Node ha-896691-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 34s)      kubelet          Node ha-896691-m04 status is now: NodeHasNoDiskPressure
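
ha-896691-m04 records a one-off CIDRAssignmentFailed event at registration, but the allocator evidently retried, since the node holds 10.244.4.0/24 under PodCIDR above. To read the assigned range directly, a sketch:

    kubectl --context ha-896691 get node ha-896691-m04 -o jsonpath='{.spec.podCIDR}'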
	
	
	==> dmesg <==
	[  +1.007273] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f725c910872f
	[  +0.000006] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f725c910872f
	[  +0.000006] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f725c910872f
	[  +0.000002] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f725c910872f
	[  +0.000004] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +6.207542] net_ratelimit: 6 callbacks suppressed
	[  +0.000028] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f725c910872f
	[  +0.000005] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +0.003936] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f725c910872f
	[  +0.000005] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +8.187423] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f725c910872f
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f725c910872f
	[  +0.000004] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f725c910872f
	[  +0.000002] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +0.003990] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f725c910872f
	[  +0.000004] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f725c910872f
	[  +0.000005] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-f725c910872f
	[  +0.000002] ll header: 00000000: 02 42 3d 74 76 e0 02 42 c0 a8 31 02 08 00
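
The martian-source messages are traffic from pod IPs (10.244.0.x) to the Service VIP 10.96.0.1 arriving on the Docker bridge br-f725c910872f, where the host kernel has no matching route; with the docker driver this is typically noise rather than a fault. They appear because martian logging is enabled, which can be verified with:

    sysctl net.ipv4.conf.all.log_martians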
	
	
	==> etcd [e1b8338c11ea2bab2940bf1554021f8aecdcd42ff3f2185fd670e2195c8f45b8] <==
	{"level":"info","ts":"2024-08-15T17:22:47.645964Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f35085faab8896f6"}
	{"level":"info","ts":"2024-08-15T17:22:47.648653Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"f35085faab8896f6","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T17:22:47.648688Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f35085faab8896f6"}
	{"level":"warn","ts":"2024-08-15T17:22:47.661584Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:43548","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-15T17:22:47.676564Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f35085faab8896f6"}
	{"level":"info","ts":"2024-08-15T17:22:47.685905Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f35085faab8896f6"}
	{"level":"warn","ts":"2024-08-15T17:22:48.076588Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f35085faab8896f6","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:22:48.076615Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f35085faab8896f6","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-15T17:23:34.442613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 13304922652180354055)"}
	{"level":"info","ts":"2024-08-15T17:23:34.443943Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"f35085faab8896f6","removed-remote-peer-urls":["https://192.168.49.4:2380"]}
	{"level":"info","ts":"2024-08-15T17:23:34.443983Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f35085faab8896f6"}
	{"level":"warn","ts":"2024-08-15T17:23:34.445927Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f35085faab8896f6"}
	{"level":"info","ts":"2024-08-15T17:23:34.445960Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f35085faab8896f6"}
	{"level":"warn","ts":"2024-08-15T17:23:34.446086Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f35085faab8896f6"}
	{"level":"info","ts":"2024-08-15T17:23:34.446105Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f35085faab8896f6"}
	{"level":"info","ts":"2024-08-15T17:23:34.446139Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"f35085faab8896f6"}
	{"level":"warn","ts":"2024-08-15T17:23:34.446249Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f35085faab8896f6","error":"context canceled"}
	{"level":"warn","ts":"2024-08-15T17:23:34.446273Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"f35085faab8896f6","error":"failed to read f35085faab8896f6 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-15T17:23:34.446287Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f35085faab8896f6"}
	{"level":"warn","ts":"2024-08-15T17:23:34.446354Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f35085faab8896f6","error":"context canceled"}
	{"level":"info","ts":"2024-08-15T17:23:34.446375Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f35085faab8896f6"}
	{"level":"info","ts":"2024-08-15T17:23:34.446394Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f35085faab8896f6"}
	{"level":"info","ts":"2024-08-15T17:23:34.446413Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"f35085faab8896f6"}
	{"level":"warn","ts":"2024-08-15T17:23:34.452144Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"f35085faab8896f6"}
	{"level":"warn","ts":"2024-08-15T17:23:34.453503Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:60826","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:23:44 up  2:06,  0 users,  load average: 1.05, 1.07, 0.76
	Linux ha-896691 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2ee744e720f672f8905c316c64ff8f45edb2a4711394a10ddff3d7312fbeab42] <==
	I0815 17:23:20.173867       1 main.go:322] Node ha-896691-m04 has CIDR [10.244.4.0/24] 
	I0815 17:23:20.174017       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:23:20.174034       1 main.go:299] handling current node
	I0815 17:23:20.174046       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0815 17:23:20.174058       1 main.go:322] Node ha-896691-m02 has CIDR [10.244.1.0/24] 
	I0815 17:23:20.174113       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0815 17:23:20.174121       1 main.go:322] Node ha-896691-m03 has CIDR [10.244.3.0/24] 
	I0815 17:23:30.173875       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:23:30.173916       1 main.go:299] handling current node
	I0815 17:23:30.173953       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0815 17:23:30.173961       1 main.go:322] Node ha-896691-m02 has CIDR [10.244.1.0/24] 
	I0815 17:23:30.174071       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0815 17:23:30.174080       1 main.go:322] Node ha-896691-m03 has CIDR [10.244.3.0/24] 
	I0815 17:23:30.174115       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0815 17:23:30.174121       1 main.go:322] Node ha-896691-m04 has CIDR [10.244.4.0/24] 
	W0815 17:23:33.519700       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 17:23:33.519747       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0815 17:23:35.740476       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:23:35.740505       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 17:23:40.173722       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:23:40.173756       1 main.go:299] handling current node
	I0815 17:23:40.173771       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0815 17:23:40.173776       1 main.go:322] Node ha-896691-m02 has CIDR [10.244.1.0/24] 
	I0815 17:23:40.173897       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0815 17:23:40.173907       1 main.go:322] Node ha-896691-m04 has CIDR [10.244.4.0/24] 
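
kindnet briefly loses its RBAC permissions (forbidden listing pods and networkpolicies) while the apiserver churns, then resumes its node loop; by the 17:23:40 pass ha-896691-m03 no longer appears, consistent with the node having been deleted. A sketch for tailing the same log via the daemonset selector, assuming the default app=kindnet label:

    kubectl --context ha-896691 -n kube-system logs -l app=kindnet --tail=20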
	
	
	==> kube-apiserver [69077708e5d33e4a1ca85e167f07beb88227b48d3199a4525b203bf3b863df18] <==
	W0815 17:21:37.805220       1 reflector.go:561] storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions: failed to list *apiextensions.CustomResourceDefinition: etcdserver: leader changed
	E0815 17:21:37.805247       1 cacher.go:478] cacher (customresourcedefinitions.apiextensions.k8s.io): unexpected ListAndWatch error: failed to list *apiextensions.CustomResourceDefinition: etcdserver: leader changed; reinitializing...
	I0815 17:21:37.833967       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 17:21:37.882256       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 17:21:37.894770       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 17:21:37.894796       1 policy_source.go:224] refreshing policies
	I0815 17:21:37.903247       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 17:21:37.903384       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 17:21:37.903552       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 17:21:37.903393       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 17:21:37.904495       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 17:21:37.904578       1 aggregator.go:171] initial CRD sync complete...
	I0815 17:21:37.904595       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 17:21:37.904606       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 17:21:37.904616       1 cache.go:39] Caches are synced for autoregister controller
	I0815 17:21:37.904641       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 17:21:37.904652       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 17:21:37.909131       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0815 17:21:37.910294       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0815 17:21:37.911436       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 17:21:37.915859       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 17:21:37.917658       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 17:21:37.934289       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 17:21:38.003722       1 shared_informer.go:320] Caches are synced for configmaps
	F0815 17:22:20.302718       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
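
Note: the F-level line above is this apiserver instance aborting because the "start-service-ip-repair-controllers" post-start hook never passed its initial service IP/port allocation check, plausibly downstream of the etcd "leader changed" errors at 17:21:37; the container 7a6ee8f1... in the next block is its restart. When that hook is stuck, the per-check health status can be read directly; a sketch, assuming kubectl can still reach a serving apiserver:

	kubectl --context ha-896691 get --raw='/healthz?verbose'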
	
	
	==> kube-apiserver [7a6ee8f1f6ba74f9237852ddefb98cc346f7f75942da852b4e9487a38499e02c] <==
	I0815 17:22:22.885417       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0815 17:22:22.885435       1 controller.go:78] Starting OpenAPI AggregationController
	I0815 17:22:22.886463       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 17:22:22.886558       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 17:22:22.972228       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 17:22:22.972468       1 policy_source.go:224] refreshing policies
	I0815 17:22:23.054056       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 17:22:23.054088       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 17:22:23.054267       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 17:22:23.054631       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 17:22:23.054845       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 17:22:23.055092       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 17:22:23.055234       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 17:22:23.055277       1 aggregator.go:171] initial CRD sync complete...
	I0815 17:22:23.055293       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 17:22:23.055300       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 17:22:23.055306       1 cache.go:39] Caches are synced for autoregister controller
	I0815 17:22:23.055534       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 17:22:23.055963       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 17:22:23.064537       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 17:22:23.066158       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 17:22:23.889249       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 17:22:24.170221       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0815 17:22:24.171603       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 17:22:24.176932       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
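
Note: the lease.go line above shows the restarted apiserver re-adding both control-plane IPs (192.168.49.2 and 192.168.49.3) to the default/kubernetes endpoints, i.e. the HA control plane is serving again. The same view is available with:

	kubectl --context ha-896691 get endpoints kubernetes -n default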
	
	
	==> kube-controller-manager [a9ca3dd6842d1fa8f276b6b2b98628233a8cc2caee5ccb269b7d1ee46b3f1aea] <==
	I0815 17:23:21.280857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.1688ms"
	I0815 17:23:21.280952       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.88µs"
	I0815 17:23:21.305844       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-2ndrq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-2ndrq\": the object has been modified; please apply your changes to the latest version and try again"
	I0815 17:23:21.306189       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"da068a4d-c584-4345-8f4e-61137531838c", APIVersion:"v1", ResourceVersion:"255", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-2ndrq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-2ndrq": the object has been modified; please apply your changes to the latest version and try again
	I0815 17:23:21.332311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="37.465604ms"
	I0815 17:23:21.332444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="85.68µs"
	I0815 17:23:23.109272       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-896691-m04"
	I0815 17:23:23.109872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896691-m04"
	I0815 17:23:23.122507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896691-m04"
	I0815 17:23:23.977755       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896691-m04"
	I0815 17:23:31.191006       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896691-m03"
	I0815 17:23:31.206896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896691-m03"
	I0815 17:23:31.254605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.143153ms"
	I0815 17:23:31.319114       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.455134ms"
	I0815 17:23:31.319235       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.068µs"
	I0815 17:23:31.326243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.830462ms"
	I0815 17:23:31.326895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.905µs"
	I0815 17:23:32.423224       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.849014ms"
	I0815 17:23:32.423328       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.524µs"
	I0815 17:23:33.407746       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.55µs"
	I0815 17:23:34.114522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.576µs"
	I0815 17:23:34.120783       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.078µs"
	I0815 17:23:38.354481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896691-m03"
	I0815 17:23:38.354538       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-896691-m04"
	I0815 17:23:43.832437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896691-m04"
	
	
	==> kube-controller-manager [abf129a8200bbc59011dcbe6c7979d0d23a40d36d919925e0ec77b05e4cc80f8] <==
	I0815 17:21:56.623738       1 serving.go:386] Generated self-signed cert in-memory
	I0815 17:21:57.146233       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 17:21:57.146256       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:21:57.147465       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 17:21:57.147465       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 17:21:57.147709       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 17:21:57.147807       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 17:22:07.158979       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
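
Note: the error above is the controller-manager's bounded wait for a healthy apiserver timing out; the verbose healthz dump it embeds shows a single failing check, [-]poststarthook/start-service-ip-repair-controllers, matching the apiserver fatal logged earlier. From a shell, waiting out such a window could look like this sketch:

	until kubectl --context ha-896691 get --raw='/readyz' >/dev/null 2>&1; do sleep 1; done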
	
	
	==> kube-proxy [2d28858dd11c34d8db1ee6e5f954df4328c7a2e9faa23ce6018fdd37ec5ced3d] <==
	I0815 17:21:39.896097       1 server_linux.go:66] "Using iptables proxy"
	I0815 17:21:40.017255       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0815 17:21:40.017315       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:21:40.034840       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0815 17:21:40.034891       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:21:40.036860       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:21:40.037258       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:21:40.037286       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:21:40.038402       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:21:40.038451       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:21:40.038511       1 config.go:197] "Starting service config controller"
	I0815 17:21:40.038579       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:21:40.038528       1 config.go:326] "Starting node config controller"
	I0815 17:21:40.038613       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:21:40.139317       1 shared_informer.go:320] Caches are synced for node config
	I0815 17:21:40.139342       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:21:40.139333       1 shared_informer.go:320] Caches are synced for endpoint slice config
	W0815 17:23:07.980737       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1453": http2: client connection lost
	W0815 17:23:07.980738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-896691&resourceVersion=1461": http2: client connection lost
	E0815 17:23:07.980856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-896691&resourceVersion=1461\": http2: client connection lost" logger="UnhandledError"
	E0815 17:23:07.980848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1453\": http2: client connection lost" logger="UnhandledError"
	W0815 17:23:07.980738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1773": http2: client connection lost
	E0815 17:23:07.980893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1773\": http2: client connection lost" logger="UnhandledError"
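
Note: the watch failures at 17:23:07 ("http2: client connection lost") coincide with the control-plane disruption while this test deletes the secondary node; client-go reflectors re-list and recover on their own. Inside the node, control-plane.minikube.internal is normally resolved via an /etc/hosts entry, which can be inspected with a sketch like:

	out/minikube-linux-amd64 -p ha-896691 ssh "grep control-plane.minikube.internal /etc/hosts"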
	
	
	==> kube-scheduler [bf8a64fe45d074858d5cd66383d8bd47a534a2d69c5f5d25923fb653e16a4c02] <==
	W0815 17:21:29.665067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 17:21:29.665110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:21:29.727011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 17:21:29.727048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:21:30.052298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 17:21:30.052336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:21:30.113326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:21:30.113376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:21:34.593486       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 17:21:34.593523       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 17:21:35.765653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 17:21:35.765701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:21:36.041998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 17:21:36.042052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:21:37.004772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 17:21:37.004811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:21:37.295100       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 17:21:37.295138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:21:37.596216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 17:21:37.596264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0815 17:21:48.381595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 17:23:31.269460       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-dzkvl\": pod busybox-7dff88458-dzkvl is already assigned to node \"ha-896691-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-dzkvl" node="ha-896691-m04"
	E0815 17:23:31.269554       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b4395e46-563c-4209-95e6-e246f1b39c61(default/busybox-7dff88458-dzkvl) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-dzkvl"
	E0815 17:23:31.269583       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-dzkvl\": pod busybox-7dff88458-dzkvl is already assigned to node \"ha-896691-m04\"" pod="default/busybox-7dff88458-dzkvl"
	I0815 17:23:31.269613       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-dzkvl" node="ha-896691-m04"
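
Note: the DefaultBinder errors above record a duplicate bind: a retry raced a bind that had already been persisted during the control-plane churn, so the pod was "already assigned" to ha-896691-m04. The scheduler treats this as benign and aborts the requeue; the final placement can be confirmed with:

	kubectl --context ha-896691 get pod busybox-7dff88458-dzkvl -o wide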
	
	
	==> kubelet <==
	Aug 15 17:23:08 ha-896691 kubelet[836]: I0815 17:23:08.061551     836 status_manager.go:851] "Failed to get status for pod" podUID="0a62c7b694611e16230e2750b1ebbbf7" pod="kube-system/kube-vip-ha-896691" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-896691\": http2: client connection lost"
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061603     836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1453&timeout=8m27s&timeoutSeconds=507&watch=true\": http2: client connection lost" logger="UnhandledError"
	Aug 15 17:23:08 ha-896691 kubelet[836]: W0815 17:23:08.061607     836 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-896691&resourceVersion=1770": http2: client connection lost
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061669     836 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-896691&resourceVersion=1770\": http2: client connection lost" logger="UnhandledError"
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061672     836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?allowWatchBookmarks=true&resourceVersion=1453&timeout=9m49s&timeoutSeconds=589&watch=true\": http2: client connection lost" logger="UnhandledError"
	Aug 15 17:23:08 ha-896691 kubelet[836]: W0815 17:23:08.061621     836 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1453": http2: client connection lost
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061701     836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1453&timeout=8m19s&timeoutSeconds=499&watch=true\": http2: client connection lost" logger="UnhandledError"
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061715     836 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1453\": http2: client connection lost" logger="UnhandledError"
	Aug 15 17:23:08 ha-896691 kubelet[836]: W0815 17:23:08.061677     836 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1453": http2: client connection lost
	Aug 15 17:23:08 ha-896691 kubelet[836]: W0815 17:23:08.061742     836 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1453": http2: client connection lost
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061773     836 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1453\": http2: client connection lost" logger="UnhandledError"
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061632     836 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-896691.17ebf6a68f7e69cb\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-896691.17ebf6a68f7e69cb  kube-system   1531 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-896691,UID:4e6ddbc64ff79588401fdb2aaddb0707,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.0\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-896691,},FirstTimestamp:2024-08-15 17:21:17 +0000 UTC,LastTimestamp:2024-08-15 17:22:21.386146658 +0000 UTC m=+70.285878304,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-896691,}"
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061767     836 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1453\": http2: client connection lost" logger="UnhandledError"
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061721     836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dha-896691&resourceVersion=1461&timeout=9m16s&timeoutSeconds=556&watch=true\": http2: client connection lost" logger="UnhandledError"
	Aug 15 17:23:08 ha-896691 kubelet[836]: W0815 17:23:08.061741     836 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1453": http2: client connection lost
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061669     836 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-896691?timeout=10s\": http2: client connection lost"
	Aug 15 17:23:08 ha-896691 kubelet[836]: E0815 17:23:08.061822     836 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1453\": http2: client connection lost" logger="UnhandledError"
	Aug 15 17:23:11 ha-896691 kubelet[836]: E0815 17:23:11.199432     836 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742591199226438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:23:11 ha-896691 kubelet[836]: E0815 17:23:11.199467     836 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742591199226438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:23:21 ha-896691 kubelet[836]: E0815 17:23:21.201589     836 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742601200988141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:23:21 ha-896691 kubelet[836]: E0815 17:23:21.201627     836 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742601200988141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:23:31 ha-896691 kubelet[836]: E0815 17:23:31.203047     836 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742611202834963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:23:31 ha-896691 kubelet[836]: E0815 17:23:31.203087     836 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742611202834963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:23:41 ha-896691 kubelet[836]: E0815 17:23:41.203989     836 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742621203805939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:23:41 ha-896691 kubelet[836]: E0815 17:23:41.204037     836 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742621203805939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
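
Note: the repeating eviction_manager errors mean the kubelet could not derive dedicated image-filesystem stats from CRI-O's ImageFsInfo response; with minikube on crio this is commonly log noise rather than an actual eviction problem. The raw CRI view can be compared with a sketch like:

	out/minikube-linux-amd64 -p ha-896691 ssh "sudo crictl imagefsinfo"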
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-896691 -n ha-896691
helpers_test.go:261: (dbg) Run:  kubectl --context ha-896691 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (14.48s)
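
Note: to iterate on this failure locally, Go's subtest filter can target the whole TestMultiControlPlane serial group (DeleteSecondaryNode depends on the cluster built by the earlier serial steps); a sketch, assuming minikube's test/integration layout and a prebuilt out/minikube-linux-amd64:

	go test -v -timeout 60m ./test/integration -run 'TestMultiControlPlane'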

                                                
                                    

Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.26
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 5.07
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.19
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.05
21 TestBinaryMirror 0.73
22 TestOffline 57.23
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 142.87
31 TestAddons/serial/GCPAuth/Namespaces 0.12
33 TestAddons/parallel/Registry 13.8
35 TestAddons/parallel/InspektorGadget 11.69
37 TestAddons/parallel/HelmTiller 10.11
39 TestAddons/parallel/CSI 56.39
40 TestAddons/parallel/Headlamp 17.36
41 TestAddons/parallel/CloudSpanner 5.8
42 TestAddons/parallel/LocalPath 52.93
43 TestAddons/parallel/NvidiaDevicePlugin 6.76
44 TestAddons/parallel/Yakd 11.62
45 TestAddons/StoppedEnableDisable 12.05
46 TestCertOptions 22.08
47 TestCertExpiration 224.28
49 TestForceSystemdFlag 25.96
50 TestForceSystemdEnv 36.2
52 TestKVMDriverInstallOrUpdate 3.52
56 TestErrorSpam/setup 20.05
57 TestErrorSpam/start 0.54
58 TestErrorSpam/status 0.83
59 TestErrorSpam/pause 1.45
60 TestErrorSpam/unpause 1.6
61 TestErrorSpam/stop 1.33
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 40.21
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 26.22
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.06
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.07
73 TestFunctional/serial/CacheCmd/cache/add_local 1.32
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 41.32
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.31
84 TestFunctional/serial/LogsFileCmd 1.31
85 TestFunctional/serial/InvalidService 4.2
87 TestFunctional/parallel/ConfigCmd 0.33
88 TestFunctional/parallel/DashboardCmd 10.36
89 TestFunctional/parallel/DryRun 0.33
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.87
95 TestFunctional/parallel/ServiceCmdConnect 9.64
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 26.47
99 TestFunctional/parallel/SSHCmd 0.64
100 TestFunctional/parallel/CpCmd 1.76
101 TestFunctional/parallel/MySQL 24.17
102 TestFunctional/parallel/FileSync 0.36
103 TestFunctional/parallel/CertSync 1.94
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
111 TestFunctional/parallel/License 0.16
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
116 TestFunctional/parallel/ImageCommands/ImageBuild 2.09
117 TestFunctional/parallel/ImageCommands/Setup 1.17
118 TestFunctional/parallel/Version/short 0.05
119 TestFunctional/parallel/Version/components 0.49
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.9
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
122 TestFunctional/parallel/ServiceCmd/DeployApp 8.16
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.78
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.9
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.92
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
128 TestFunctional/parallel/ProfileCmd/profile_list 0.36
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
130 TestFunctional/parallel/MountCmd/any-port 6.54
131 TestFunctional/parallel/ServiceCmd/List 0.61
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
138 TestFunctional/parallel/ServiceCmd/Format 0.4
139 TestFunctional/parallel/ServiceCmd/URL 0.32
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.38
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.18
145 TestFunctional/parallel/MountCmd/specific-port 2.05
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.42
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 100.9
160 TestMultiControlPlane/serial/DeployApp 3.58
161 TestMultiControlPlane/serial/PingHostFromPods 0.99
162 TestMultiControlPlane/serial/AddWorkerNode 34.98
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.64
165 TestMultiControlPlane/serial/CopyFile 15.43
166 TestMultiControlPlane/serial/StopSecondaryNode 12.44
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.46
168 TestMultiControlPlane/serial/RestartSecondaryNode 22.63
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.8
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 182.44
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.45
173 TestMultiControlPlane/serial/StopCluster 35.39
174 TestMultiControlPlane/serial/RestartCluster 49.75
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.45
176 TestMultiControlPlane/serial/AddSecondaryNode 53.14
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.63
181 TestJSONOutput/start/Command 40.06
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.67
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.57
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.76
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.19
206 TestKicCustomNetwork/create_custom_network 28.07
207 TestKicCustomNetwork/use_default_bridge_network 23.02
208 TestKicExistingNetwork 21.94
209 TestKicCustomSubnet 23.33
210 TestKicStaticIP 25.24
211 TestMainNoArgs 0.04
212 TestMinikubeProfile 49.07
215 TestMountStart/serial/StartWithMountFirst 5.33
216 TestMountStart/serial/VerifyMountFirst 0.24
217 TestMountStart/serial/StartWithMountSecond 5.49
218 TestMountStart/serial/VerifyMountSecond 0.23
219 TestMountStart/serial/DeleteFirst 1.58
220 TestMountStart/serial/VerifyMountPostDelete 0.23
221 TestMountStart/serial/Stop 1.16
222 TestMountStart/serial/RestartStopped 7.18
223 TestMountStart/serial/VerifyMountPostStop 0.24
226 TestMultiNode/serial/FreshStart2Nodes 71.67
227 TestMultiNode/serial/DeployApp2Nodes 3.1
228 TestMultiNode/serial/PingHostFrom2Pods 0.7
229 TestMultiNode/serial/AddNode 28.33
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.27
232 TestMultiNode/serial/CopyFile 8.76
233 TestMultiNode/serial/StopNode 2.06
234 TestMultiNode/serial/StartAfterStop 8.92
235 TestMultiNode/serial/RestartKeepsNodes 93.63
236 TestMultiNode/serial/DeleteNode 5.2
237 TestMultiNode/serial/StopMultiNode 23.73
238 TestMultiNode/serial/RestartMultiNode 49.42
239 TestMultiNode/serial/ValidateNameConflict 22.18
244 TestPreload 119.44
246 TestScheduledStopUnix 95.71
249 TestInsufficientStorage 9.41
250 TestRunningBinaryUpgrade 59.4
252 TestKubernetesUpgrade 344.74
253 TestMissingContainerUpgrade 113.72
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestNoKubernetes/serial/StartWithK8s 30.11
264 TestNetworkPlugins/group/false 7.76
268 TestStoppedBinaryUpgrade/Setup 0.67
269 TestStoppedBinaryUpgrade/Upgrade 94.52
270 TestNoKubernetes/serial/StartWithStopK8s 11.58
271 TestNoKubernetes/serial/Start 5.42
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
273 TestNoKubernetes/serial/ProfileList 0.85
274 TestNoKubernetes/serial/Stop 1.21
275 TestNoKubernetes/serial/StartNoArgs 8.75
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
286 TestPause/serial/Start 49.67
287 TestPause/serial/SecondStartNoReconfiguration 31.05
288 TestNetworkPlugins/group/auto/Start 46.77
289 TestPause/serial/Pause 0.88
290 TestPause/serial/VerifyStatus 0.28
291 TestPause/serial/Unpause 0.93
292 TestPause/serial/PauseAgain 0.89
293 TestPause/serial/DeletePaused 2.64
294 TestPause/serial/VerifyDeletedResources 0.51
295 TestNetworkPlugins/group/kindnet/Start 42.59
296 TestNetworkPlugins/group/auto/KubeletFlags 0.25
297 TestNetworkPlugins/group/auto/NetCatPod 10.17
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/auto/DNS 0.12
300 TestNetworkPlugins/group/auto/Localhost 0.11
301 TestNetworkPlugins/group/auto/HairPin 0.11
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
303 TestNetworkPlugins/group/kindnet/NetCatPod 10.16
304 TestNetworkPlugins/group/kindnet/DNS 0.14
305 TestNetworkPlugins/group/kindnet/Localhost 0.1
306 TestNetworkPlugins/group/kindnet/HairPin 0.11
307 TestNetworkPlugins/group/calico/Start 56.2
308 TestNetworkPlugins/group/custom-flannel/Start 45.85
309 TestNetworkPlugins/group/calico/ControllerPod 5.04
310 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
311 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.18
312 TestNetworkPlugins/group/calico/KubeletFlags 0.41
313 TestNetworkPlugins/group/calico/NetCatPod 9.48
314 TestNetworkPlugins/group/custom-flannel/DNS 0.13
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
317 TestNetworkPlugins/group/calico/DNS 0.15
318 TestNetworkPlugins/group/calico/Localhost 0.11
319 TestNetworkPlugins/group/calico/HairPin 0.11
320 TestNetworkPlugins/group/enable-default-cni/Start 38.08
321 TestNetworkPlugins/group/flannel/Start 49.89
322 TestNetworkPlugins/group/bridge/Start 39.77
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.19
325 TestNetworkPlugins/group/enable-default-cni/DNS 20.93
327 TestStartStop/group/old-k8s-version/serial/FirstStart 107.09
328 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
329 TestNetworkPlugins/group/bridge/NetCatPod 11.18
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestNetworkPlugins/group/bridge/DNS 0.14
332 TestNetworkPlugins/group/bridge/Localhost 0.12
333 TestNetworkPlugins/group/bridge/HairPin 0.12
334 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
335 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
336 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
337 TestNetworkPlugins/group/flannel/NetCatPod 9.17
338 TestNetworkPlugins/group/flannel/DNS 0.14
339 TestNetworkPlugins/group/flannel/Localhost 0.17
340 TestNetworkPlugins/group/flannel/HairPin 0.13
342 TestStartStop/group/embed-certs/serial/FirstStart 48.39
344 TestStartStop/group/no-preload/serial/FirstStart 61.6
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.19
347 TestStartStop/group/embed-certs/serial/DeployApp 7.25
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.84
349 TestStartStop/group/embed-certs/serial/Stop 11.9
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
351 TestStartStop/group/no-preload/serial/DeployApp 7.22
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.81
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.8
354 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
355 TestStartStop/group/embed-certs/serial/SecondStart 261.7
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.81
357 TestStartStop/group/no-preload/serial/Stop 11.82
358 TestStartStop/group/old-k8s-version/serial/DeployApp 8.37
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 271.31
361 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.84
362 TestStartStop/group/old-k8s-version/serial/Stop 12
363 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
364 TestStartStop/group/no-preload/serial/SecondStart 298.82
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
366 TestStartStop/group/old-k8s-version/serial/SecondStart 144.7
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
368 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
369 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
370 TestStartStop/group/old-k8s-version/serial/Pause 2.44
372 TestStartStop/group/newest-cni/serial/FirstStart 26.46
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
375 TestStartStop/group/newest-cni/serial/Stop 1.19
376 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
377 TestStartStop/group/newest-cni/serial/SecondStart 13.02
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
381 TestStartStop/group/newest-cni/serial/Pause 2.77
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
385 TestStartStop/group/embed-certs/serial/Pause 2.64
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
389 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.47
390 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
392 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
393 TestStartStop/group/no-preload/serial/Pause 2.46
x
+
TestDownloadOnly/v1.20.0/json-events (6.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-201245 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-201245 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.258562097s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-201245
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-201245: exit status 85 (59.036634ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-201245 | jenkins | v1.33.1 | 15 Aug 24 17:04 UTC |          |
	|         | -p download-only-201245        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:04:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:04:54.574853  384103 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:04:54.575095  384103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:54.575145  384103 out.go:358] Setting ErrFile to fd 2...
	I0815 17:04:54.575186  384103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:04:54.575578  384103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	W0815 17:04:54.575711  384103 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19450-377193/.minikube/config/config.json: open /home/jenkins/minikube-integration/19450-377193/.minikube/config/config.json: no such file or directory
	I0815 17:04:54.576278  384103 out.go:352] Setting JSON to true
	I0815 17:04:54.577239  384103 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6447,"bootTime":1723735048,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:04:54.577295  384103 start.go:139] virtualization: kvm guest
	I0815 17:04:54.579672  384103 out.go:97] [download-only-201245] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0815 17:04:54.579764  384103 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 17:04:54.579822  384103 notify.go:220] Checking for updates...
	I0815 17:04:54.581087  384103 out.go:169] MINIKUBE_LOCATION=19450
	I0815 17:04:54.582364  384103 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:04:54.583582  384103 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:04:54.584864  384103 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	I0815 17:04:54.586006  384103 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0815 17:04:54.588080  384103 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 17:04:54.588335  384103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:04:54.611149  384103 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:04:54.611260  384103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:04:54.655645  384103 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 17:04:54.647100288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:04:54.655774  384103 docker.go:307] overlay module found
	I0815 17:04:54.657454  384103 out.go:97] Using the docker driver based on user configuration
	I0815 17:04:54.657477  384103 start.go:297] selected driver: docker
	I0815 17:04:54.657490  384103 start.go:901] validating driver "docker" against <nil>
	I0815 17:04:54.657578  384103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:04:54.702705  384103 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 17:04:54.693913922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:04:54.702921  384103 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:04:54.703655  384103 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0815 17:04:54.703873  384103 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 17:04:54.705578  384103 out.go:169] Using Docker driver with root privileges
	I0815 17:04:54.706777  384103 cni.go:84] Creating CNI manager for ""
	I0815 17:04:54.706797  384103 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 17:04:54.706810  384103 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:04:54.706878  384103 start.go:340] cluster config:
	{Name:download-only-201245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-201245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:04:54.708290  384103 out.go:97] Starting "download-only-201245" primary control-plane node in "download-only-201245" cluster
	I0815 17:04:54.708306  384103 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 17:04:54.709477  384103 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:04:54.709498  384103 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 17:04:54.709607  384103 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:04:54.725335  384103 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:04:54.725515  384103 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:04:54.725594  384103 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:04:54.747638  384103 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:04:54.747669  384103 cache.go:56] Caching tarball of preloaded images
	I0815 17:04:54.747826  384103 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 17:04:54.749661  384103 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 17:04:54.749691  384103 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 17:04:54.796056  384103 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:04:58.173684  384103 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:04:59.314281  384103 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 17:04:59.314378  384103 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-201245 host does not exist
	  To start a cluster, run: "minikube start -p download-only-201245"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-201245
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (5.07s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-181785 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-181785 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.0731968s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.07s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-181785
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-181785: exit status 85 (57.639779ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-201245 | jenkins | v1.33.1 | 15 Aug 24 17:04 UTC |                     |
	|         | -p download-only-201245        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| delete  | -p download-only-201245        | download-only-201245 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| start   | -o=json --download-only        | download-only-181785 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | -p download-only-181785        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:05:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:05:01.207839  384452 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:01.207957  384452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:01.207965  384452 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:01.207969  384452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:01.208146  384452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:05:01.208707  384452 out.go:352] Setting JSON to true
	I0815 17:05:01.209577  384452 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6453,"bootTime":1723735048,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:05:01.209646  384452 start.go:139] virtualization: kvm guest
	I0815 17:05:01.211554  384452 out.go:97] [download-only-181785] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:05:01.211721  384452 notify.go:220] Checking for updates...
	I0815 17:05:01.213063  384452 out.go:169] MINIKUBE_LOCATION=19450
	I0815 17:05:01.214468  384452 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:01.215596  384452 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:05:01.216704  384452 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	I0815 17:05:01.217792  384452 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0815 17:05:01.220095  384452 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 17:05:01.220312  384452 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:01.241942  384452 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:05:01.242041  384452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:01.285714  384452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-08-15 17:05:01.277298728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:05:01.285860  384452 docker.go:307] overlay module found
	I0815 17:05:01.287465  384452 out.go:97] Using the docker driver based on user configuration
	I0815 17:05:01.287497  384452 start.go:297] selected driver: docker
	I0815 17:05:01.287514  384452 start.go:901] validating driver "docker" against <nil>
	I0815 17:05:01.287613  384452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:01.332651  384452 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-08-15 17:05:01.323296409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:05:01.332887  384452 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:05:01.333528  384452 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0815 17:05:01.333723  384452 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 17:05:01.335458  384452 out.go:169] Using Docker driver with root privileges
	I0815 17:05:01.336762  384452 cni.go:84] Creating CNI manager for ""
	I0815 17:05:01.336783  384452 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0815 17:05:01.336797  384452 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:01.336865  384452 start.go:340] cluster config:
	{Name:download-only-181785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-181785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:01.338036  384452 out.go:97] Starting "download-only-181785" primary control-plane node in "download-only-181785" cluster
	I0815 17:05:01.338056  384452 cache.go:121] Beginning downloading kic base image for docker with crio
	I0815 17:05:01.339185  384452 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:05:01.339205  384452 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:05:01.339312  384452 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:05:01.354237  384452 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:05:01.354357  384452 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:05:01.354373  384452 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:05:01.354377  384452 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:05:01.354387  384452 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:05:01.362983  384452 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:05:01.363010  384452 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:01.363168  384452 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:05:01.364864  384452 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0815 17:05:01.364884  384452 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 17:05:01.393797  384452 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:05:04.824909  384452 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 17:05:04.824997  384452 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19450-377193/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 17:05:05.557943  384452 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:05:05.558309  384452 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/download-only-181785/config.json ...
	I0815 17:05:05.558338  384452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/download-only-181785/config.json: {Name:mkebd2de908e3adca9aa21382d78d486aa7f70a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:05.558484  384452 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:05:05.558618  384452 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19450-377193/.minikube/cache/linux/amd64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-181785 host does not exist
	  To start a cluster, run: "minikube start -p download-only-181785"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.19s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-181785
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.05s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-962475 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-962475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-962475
--- PASS: TestDownloadOnlyKic (1.05s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-527485 --alsologtostderr --binary-mirror http://127.0.0.1:39117 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-527485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-527485
--- PASS: TestBinaryMirror (0.73s)

TestOffline (57.23s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-157216 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-157216 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (53.605915512s)
helpers_test.go:175: Cleaning up "offline-crio-157216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-157216
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-157216: (3.621537008s)
--- PASS: TestOffline (57.23s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-703024
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-703024: exit status 85 (49.87362ms)

-- stdout --
	* Profile "addons-703024" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-703024"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-703024
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-703024: exit status 85 (49.342371ms)

-- stdout --
	* Profile "addons-703024" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-703024"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (142.87s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-703024 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-703024 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m22.872310074s)
--- PASS: TestAddons/Setup (142.87s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-703024 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-703024 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Registry (13.8s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.566707ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-jnqvt" [2df2b6d1-e4e8-4d1b-962b-574054625724] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002442878s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4xk99" [7672bca9-2613-4a51-b743-107bdc30df7b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003244415s
addons_test.go:342: (dbg) Run:  kubectl --context addons-703024 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-703024 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-703024 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.03520473s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 ip
2024/08/15 17:08:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.80s)

TestAddons/parallel/InspektorGadget (11.69s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xlx4w" [d7a42c66-448a-440a-8809-f95c68a49063] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003141489s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-703024
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-703024: (5.688981131s)
--- PASS: TestAddons/parallel/InspektorGadget (11.69s)

TestAddons/parallel/HelmTiller (10.11s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.411634ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-twgzw" [f0d20030-3d71-47ce-9f44-cf4f462d6c84] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003809933s
addons_test.go:475: (dbg) Run:  kubectl --context addons-703024 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-703024 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.614219312s)
addons_test.go:480: kubectl --context addons-703024 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.11s)

TestAddons/parallel/CSI (56.39s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.216245ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-703024 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-703024 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [09de73ae-a10a-49e1-952e-26d8439b9d58] Pending
helpers_test.go:344: "task-pv-pod" [09de73ae-a10a-49e1-952e-26d8439b9d58] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [09de73ae-a10a-49e1-952e-26d8439b9d58] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004071891s
addons_test.go:590: (dbg) Run:  kubectl --context addons-703024 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-703024 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-703024 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-703024 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-703024 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-703024 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-703024 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7965a1fa-09dc-4e8b-b130-10ce9b74963d] Pending
helpers_test.go:344: "task-pv-pod-restore" [7965a1fa-09dc-4e8b-b130-10ce9b74963d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7965a1fa-09dc-4e8b-b130-10ce9b74963d] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003359841s
addons_test.go:632: (dbg) Run:  kubectl --context addons-703024 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-703024 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-703024 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-703024 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.496632333s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-703024 addons disable volumesnapshots --alsologtostderr -v=1: (1.32196045s)
--- PASS: TestAddons/parallel/CSI (56.39s)

TestAddons/parallel/Headlamp (17.36s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-703024 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-pjzsl" [d0a09545-f10d-4743-aa9a-90bf186e6201] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-pjzsl" [d0a09545-f10d-4743-aa9a-90bf186e6201] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-pjzsl" [d0a09545-f10d-4743-aa9a-90bf186e6201] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003525661s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-703024 addons disable headlamp --alsologtostderr -v=1: (5.608580556s)
--- PASS: TestAddons/parallel/Headlamp (17.36s)

TestAddons/parallel/CloudSpanner (5.8s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-4jkbj" [09c8f98c-b6c7-4da1-ac92-81be02d6a56d] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004374642s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-703024
--- PASS: TestAddons/parallel/CloudSpanner (5.80s)

TestAddons/parallel/LocalPath (52.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-703024 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-703024 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703024 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2bdb4490-cb73-48a7-a495-7911e600fa06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2bdb4490-cb73-48a7-a495-7911e600fa06] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2bdb4490-cb73-48a7-a495-7911e600fa06] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00343198s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-703024 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 ssh "cat /opt/local-path-provisioner/pvc-50d57a12-86e5-43f7-b121-a6d8b09e9508_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-703024 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-703024 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-703024 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.132326528s)
--- PASS: TestAddons/parallel/LocalPath (52.93s)
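
For reference, the flow this test drives can be replayed by hand against the same profile. A minimal sketch built from the commands in the log above; the pvc-50d57a12-... directory name is generated per claim, so substitute the UID from your own run:

    # create the claim and a pod that writes into it (manifests from minikube's testdata)
    kubectl --context addons-703024 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-703024 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # poll the claim's phase until it leaves Pending
    kubectl --context addons-703024 get pvc test-pvc -n default -o jsonpath={.status.phase}
    # read the written file back from the provisioner's host path
    out/minikube-linux-amd64 -p addons-703024 ssh "cat /opt/local-path-provisioner/pvc-50d57a12-86e5-43f7-b121-a6d8b09e9508_default_test-pvc/file1"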

TestAddons/parallel/NvidiaDevicePlugin (6.76s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xqk8k" [dd6bbf51-8737-4c2c-9596-00154e1ec52d] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002878538s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-703024
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.76s)

TestAddons/parallel/Yakd (11.62s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7npwf" [68dbce2d-79c5-4ebc-92db-2e6068746274] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003555161s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-703024 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-703024 addons disable yakd --alsologtostderr -v=1: (5.61134544s)
--- PASS: TestAddons/parallel/Yakd (11.62s)

TestAddons/StoppedEnableDisable (12.05s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-703024
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-703024: (11.816751308s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-703024
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-703024
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-703024
--- PASS: TestAddons/StoppedEnableDisable (12.05s)

TestCertOptions (22.08s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-783484 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0815 17:42:31.842626  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-783484 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (19.640856462s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-783484 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-783484 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-783484 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-783484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-783484
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-783484: (1.873634074s)
--- PASS: TestCertOptions (22.08s)
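
The assertions here reduce to inspecting the generated apiserver certificate and kubeconfig. A sketch of an equivalent manual check, reusing the commands from the log (the grep filter is an illustrative addition, not part of the test):

    # the extra --apiserver-ips/--apiserver-names should appear as SANs in the cert
    out/minikube-linux-amd64 -p cert-options-783484 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # the custom --apiserver-port=8555 should appear in the kubeconfig server URL
    kubectl --context cert-options-783484 config view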

TestCertExpiration (224.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-658956 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-658956 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.261779688s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-658956 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-658956 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.63965815s)
helpers_test.go:175: Cleaning up "cert-expiration-658956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-658956
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-658956: (2.377868847s)
--- PASS: TestCertExpiration (224.28s)
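
Note the timing: the two starts and the cleanup account for roughly 44s of the 224s total; the remainder is the deliberate wait for the 3-minute certificates to lapse before the second start regenerates them. A sketch of the sequence (the sleep is an assumption about what the test waits for, not a command it runs):

    out/minikube-linux-amd64 start -p cert-expiration-658956 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
    sleep 180   # assumed wait: let the short-lived certs expire
    out/minikube-linux-amd64 start -p cert-expiration-658956 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio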

TestForceSystemdFlag (25.96s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-687195 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-687195 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.494797219s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-687195 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-687195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-687195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-687195: (2.207664305s)
--- PASS: TestForceSystemdFlag (25.96s)
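
The --force-systemd flag is verified by reading CRI-O's generated drop-in config. A sketch of the check; the expected cgroup_manager line is an assumption based on CRI-O's crio.runtime configuration keys, since the log does not print the file's contents:

    out/minikube-linux-amd64 -p force-systemd-flag-687195 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
    # assumed expectation, a line like: cgroup_manager = "systemd"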

TestForceSystemdEnv (36.2s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-207155 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-207155 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.440489102s)
helpers_test.go:175: Cleaning up "force-systemd-env-207155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-207155
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-207155: (6.75457521s)
--- PASS: TestForceSystemdEnv (36.20s)

TestKVMDriverInstallOrUpdate (3.52s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.52s)

TestErrorSpam/setup (20.05s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-195771 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-195771 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-195771 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-195771 --driver=docker  --container-runtime=crio: (20.04815556s)
--- PASS: TestErrorSpam/setup (20.05s)

TestErrorSpam/start (0.54s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 start --dry-run
--- PASS: TestErrorSpam/start (0.54s)

TestErrorSpam/status (0.83s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 status
--- PASS: TestErrorSpam/status (0.83s)

TestErrorSpam/pause (1.45s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.6s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

TestErrorSpam/stop (1.33s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 stop: (1.163980364s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195771 --log_dir /tmp/nospam-195771 stop
--- PASS: TestErrorSpam/stop (1.33s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19450-377193/.minikube/files/etc/test/nested/copy/384091/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.21s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-605215 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-605215 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (40.206000256s)
--- PASS: TestFunctional/serial/StartWithProxy (40.21s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.22s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-605215 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-605215 --alsologtostderr -v=8: (26.21630786s)
functional_test.go:663: soft start took 26.217016512s for "functional-605215" cluster.
--- PASS: TestFunctional/serial/SoftStart (26.22s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-605215 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-605215 cache add registry.k8s.io/pause:3.3: (1.096713089s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-605215 cache add registry.k8s.io/pause:latest: (1.014185593s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-605215 /tmp/TestFunctionalserialCacheCmdcacheadd_local3050348579/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 cache add minikube-local-cache-test:functional-605215
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-605215 cache add minikube-local-cache-test:functional-605215: (1.022134721s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 cache delete minikube-local-cache-test:functional-605215
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-605215
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)
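
The local variant round-trips a freshly built host image through minikube's cache. A condensed sketch; the build-context path is a placeholder for the generated temp directory the test uses:

    docker build -t minikube-local-cache-test:functional-605215 ./build-context
    out/minikube-linux-amd64 -p functional-605215 cache add minikube-local-cache-test:functional-605215
    out/minikube-linux-amd64 -p functional-605215 cache delete minikube-local-cache-test:functional-605215
    docker rmi minikube-local-cache-test:functional-605215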

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-605215 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (256.228122ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
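
The reload sequence, condensed: remove the image from the node's container runtime, confirm it is gone, then repopulate it from the host-side cache. All commands are taken verbatim from the log above:

    out/minikube-linux-amd64 -p functional-605215 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-605215 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: no such image
    out/minikube-linux-amd64 -p functional-605215 cache reload
    out/minikube-linux-amd64 -p functional-605215 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds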

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 kubectl -- --context functional-605215 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-605215 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (41.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-605215 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-605215 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.317601542s)
functional_test.go:761: restart took 41.317730505s for "functional-605215" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.32s)
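
--extra-config takes component.flag=value form and is applied on a restart of the existing cluster, which is why this step reuses the running functional-605215 profile rather than creating a new one:

    out/minikube-linux-amd64 start -p functional-605215 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all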

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-605215 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.31s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-605215 logs: (1.305378965s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.31s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 logs --file /tmp/TestFunctionalserialLogsFileCmd3193877070/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-605215 logs --file /tmp/TestFunctionalserialLogsFileCmd3193877070/001/logs.txt: (1.313699766s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

TestFunctional/serial/InvalidService (4.2s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-605215 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-605215
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-605215: exit status 115 (311.360989ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31211 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-605215 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.20s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-605215 config get cpus: exit status 14 (68.200137ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-605215 config get cpus: exit status 14 (47.373121ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
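
The exit codes are the point here: config get on an unset key fails with status 14, and a set/unset cycle returns the key to that state. Condensed from the log:

    out/minikube-linux-amd64 -p functional-605215 config get cpus     # exit 14: key not found
    out/minikube-linux-amd64 -p functional-605215 config set cpus 2
    out/minikube-linux-amd64 -p functional-605215 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-605215 config unset cpus
    out/minikube-linux-amd64 -p functional-605215 config get cpus     # exit 14 again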

TestFunctional/parallel/DashboardCmd (10.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-605215 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-605215 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 426843: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.36s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-605215 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-605215 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (141.656603ms)

-- stdout --
	* [functional-605215] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0815 17:16:48.096361  426297 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:16:48.096475  426297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:16:48.096486  426297 out.go:358] Setting ErrFile to fd 2...
	I0815 17:16:48.096490  426297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:16:48.096710  426297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:16:48.097284  426297 out.go:352] Setting JSON to false
	I0815 17:16:48.098351  426297 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7160,"bootTime":1723735048,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:16:48.098412  426297 start.go:139] virtualization: kvm guest
	I0815 17:16:48.100563  426297 out.go:177] * [functional-605215] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:16:48.102549  426297 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:16:48.102629  426297 notify.go:220] Checking for updates...
	I0815 17:16:48.105134  426297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:16:48.106466  426297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:16:48.107765  426297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	I0815 17:16:48.109060  426297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:16:48.110243  426297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:16:48.111773  426297 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:16:48.112232  426297 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:16:48.133488  426297 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:16:48.133607  426297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:16:48.182782  426297 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 17:16:48.173588641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:16:48.182946  426297 docker.go:307] overlay module found
	I0815 17:16:48.184819  426297 out.go:177] * Using the docker driver based on existing profile
	I0815 17:16:48.186122  426297 start.go:297] selected driver: docker
	I0815 17:16:48.186144  426297 start.go:901] validating driver "docker" against &{Name:functional-605215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-605215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:16:48.186265  426297 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:16:48.188360  426297 out.go:201] 
	W0815 17:16:48.189647  426297 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 17:16:48.190834  426297 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-605215 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
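
--dry-run exercises the full flag validation without creating or mutating the cluster, so an undersized --memory fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY; the usable minimum is 1800MB per the output above):

    out/minikube-linux-amd64 start -p functional-605215 --dry-run --memory 250MB --driver=docker --container-runtime=crio
    echo $?   # 23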

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-605215 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-605215 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (136.229412ms)

-- stdout --
	* [functional-605215] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0815 17:16:48.422483  426517 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:16:48.422594  426517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:16:48.422602  426517 out.go:358] Setting ErrFile to fd 2...
	I0815 17:16:48.422606  426517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:16:48.422859  426517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:16:48.423354  426517 out.go:352] Setting JSON to false
	I0815 17:16:48.424332  426517 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7160,"bootTime":1723735048,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:16:48.424395  426517 start.go:139] virtualization: kvm guest
	I0815 17:16:48.426398  426517 out.go:177] * [functional-605215] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0815 17:16:48.428602  426517 notify.go:220] Checking for updates...
	I0815 17:16:48.428634  426517 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:16:48.430798  426517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:16:48.432167  426517 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:16:48.433568  426517 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	I0815 17:16:48.434769  426517 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:16:48.435870  426517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:16:48.437348  426517 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:16:48.437781  426517 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:16:48.459147  426517 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:16:48.459255  426517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:16:48.504899  426517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 17:16:48.49613435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:16:48.505014  426517 docker.go:307] overlay module found
	I0815 17:16:48.506784  426517 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0815 17:16:48.507936  426517 start.go:297] selected driver: docker
	I0815 17:16:48.507955  426517 start.go:901] validating driver "docker" against &{Name:functional-605215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-605215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:16:48.508054  426517 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:16:48.510074  426517 out.go:201] 
	W0815 17:16:48.511263  426517 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 17:16:48.512371  426517 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.87s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)

TestFunctional/parallel/ServiceCmdConnect (9.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-605215 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-605215 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-t2ntg" [cdba332f-6d10-42cc-a22a-51e5827b2218] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-t2ntg" [cdba332f-6d10-42cc-a22a-51e5827b2218] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.040313136s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31301
functional_test.go:1675: http://192.168.49.2:31301: success! body:

Hostname: hello-node-connect-67bdd5bbb4-t2ntg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31301
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.64s)
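
The sequence above is the standard NodePort smoke test: create a deployment, expose it, resolve the node URL, and fetch it. A minimal sketch of the same flow run by hand:

kubectl --context functional-605215 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-605215 expose deployment hello-node-connect --type=NodePort --port=8080
# resolve the NodePort URL (http://<node-ip>:<port>) and hit it
URL=$(out/minikube-linux-amd64 -p functional-605215 service hello-node-connect --url)
curl -s "$URL"   # echoserver reflects the hostname and request details back, as shown above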

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (26.47s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [abd741a8-9dd6-4786-aab8-44abd22a682a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003995897s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-605215 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-605215 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-605215 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-605215 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2ec9a6e8-66e6-4703-a6e6-14f449db124f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2ec9a6e8-66e6-4703-a6e6-14f449db124f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004604377s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-605215 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-605215 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-605215 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d9880d56-cf23-46be-bee1-060250545f8a] Pending
helpers_test.go:344: "sp-pod" [d9880d56-cf23-46be-bee1-060250545f8a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d9880d56-cf23-46be-bee1-060250545f8a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002944131s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-605215 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.47s)
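
The test's point is that data written through the claim survives pod deletion: touch a file, delete and recreate the pod, then list the mount. The testdata manifests are not reproduced in this log, so the claim below is a minimal sketch (the claim name matches the log; the storage size and access mode are illustrative assumptions):

kubectl --context functional-605215 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-605215 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-605215 delete pod sp-pod
# recreate sp-pod from the same pod manifest, then confirm the file survived:
kubectl --context functional-605215 exec sp-pod -- ls /tmp/mount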

TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

TestFunctional/parallel/CpCmd (1.76s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh -n functional-605215 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 cp functional-605215:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd772322395/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh -n functional-605215 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh -n functional-605215 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.76s)
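
A sketch of the same cp round trip outside the harness (the local destination path is illustrative); note the third check above also shows cp creating missing destination directories inside the node:

out/minikube-linux-amd64 -p functional-605215 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-605215 ssh -n functional-605215 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-amd64 -p functional-605215 cp functional-605215:/home/docker/cp-test.txt ./cp-test-local.txt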

TestFunctional/parallel/MySQL (24.17s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-605215 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-mt4dx" [38a9769f-54f0-49db-a2cc-d607d9e04db5] Pending
helpers_test.go:344: "mysql-6cdb49bbb-mt4dx" [38a9769f-54f0-49db-a2cc-d607d9e04db5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-mt4dx" [38a9769f-54f0-49db-a2cc-d607d9e04db5] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003827398s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-605215 exec mysql-6cdb49bbb-mt4dx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-605215 exec mysql-6cdb49bbb-mt4dx -- mysql -ppassword -e "show databases;": exit status 1 (119.312186ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-605215 exec mysql-6cdb49bbb-mt4dx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-605215 exec mysql-6cdb49bbb-mt4dx -- mysql -ppassword -e "show databases;": exit status 1 (125.220514ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-605215 exec mysql-6cdb49bbb-mt4dx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-605215 exec mysql-6cdb49bbb-mt4dx -- mysql -ppassword -e "show databases;": exit status 1 (255.683192ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-605215 exec mysql-6cdb49bbb-mt4dx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.17s)
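
The retries above are the interesting part: the pod reports Running while mysqld is still initializing, so the first exec fails with ERROR 1045 (the init scripts have not applied the root password yet) and the next two with ERROR 2002 (the server socket is not up). A sketch of the usual guard, polling until the server actually answers:

# pod Running does not imply mysqld ready; poll the query until it succeeds
until kubectl --context functional-605215 exec mysql-6cdb49bbb-mt4dx -- \
    mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
  sleep 2
done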

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/384091/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo cat /etc/test/nested/copy/384091/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
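
This exercises minikube's file sync: files placed under the host's ~/.minikube/files tree are copied into the node at the same path when the cluster starts. A minimal sketch (the hello path is illustrative, not the test's):

mkdir -p ~/.minikube/files/etc/test
echo "synced from the host" > ~/.minikube/files/etc/test/hello
# after the profile is (re)started, the file appears inside the node:
out/minikube-linux-amd64 -p functional-605215 ssh "sudo cat /etc/test/hello"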

TestFunctional/parallel/CertSync (1.94s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/384091.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo cat /etc/ssl/certs/384091.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/384091.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo cat /usr/share/ca-certificates/384091.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3840912.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo cat /etc/ssl/certs/3840912.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3840912.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo cat /usr/share/ca-certificates/3840912.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.94s)
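
The paired checks above cover each certificate under two names: the literal .pem file and its OpenSSL subject-hash link (the .0 suffix), which is how the system trust store indexes CA certs. Assuming the cert was staged under the host's ~/.minikube/certs, the hashed name can be reproduced with:

# prints the subject hash, e.g. 51391683, matching /etc/ssl/certs/51391683.0 in the node
openssl x509 -hash -noout -in ~/.minikube/certs/384091.pem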

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-605215 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
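
The go-template above just flattens the label map of the first node; the everyday equivalent is:

kubectl --context functional-605215 get nodes --show-labels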

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-605215 ssh "sudo systemctl is-active docker": exit status 1 (251.745641ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-605215 ssh "sudo systemctl is-active containerd": exit status 1 (300.218083ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
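
The exit status carries the signal here: systemctl is-active exits 0 only for "active" and 3 for "inactive", and minikube ssh propagates it, hence "Process exited with status 3" for both disabled runtimes. On this crio profile the active runtime answers cleanly:

out/minikube-linux-amd64 -p functional-605215 ssh "sudo systemctl is-active crio"   # prints "active", exits 0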

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-605215 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-605215
localhost/kicbase/echo-server:functional-605215
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-605215 image ls --format short --alsologtostderr:
I0815 17:16:53.387141  427195 out.go:345] Setting OutFile to fd 1 ...
I0815 17:16:53.387361  427195 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:16:53.387368  427195 out.go:358] Setting ErrFile to fd 2...
I0815 17:16:53.387373  427195 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:16:53.387560  427195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
I0815 17:16:53.389045  427195 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:16:53.389335  427195 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:16:53.389799  427195 cli_runner.go:164] Run: docker container inspect functional-605215 --format={{.State.Status}}
I0815 17:16:53.409476  427195 ssh_runner.go:195] Run: systemctl --version
I0815 17:16:53.409523  427195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605215
I0815 17:16:53.425234  427195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/functional-605215/id_rsa Username:docker}
I0815 17:16:53.516589  427195 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-605215 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/nginx                 | alpine             | 1ae23480369fa | 45.1MB |
| localhost/kicbase/echo-server           | functional-605215  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/nginx                 | latest             | 900dca2a61f57 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-605215  | 96505d405c958 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-605215 image ls --format table --alsologtostderr:
I0815 17:16:54.057937  427587 out.go:345] Setting OutFile to fd 1 ...
I0815 17:16:54.058067  427587 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:16:54.058079  427587 out.go:358] Setting ErrFile to fd 2...
I0815 17:16:54.058087  427587 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:16:54.058306  427587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
I0815 17:16:54.058916  427587 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:16:54.059042  427587 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:16:54.059473  427587 cli_runner.go:164] Run: docker container inspect functional-605215 --format={{.State.Status}}
I0815 17:16:54.077543  427587 ssh_runner.go:195] Run: systemctl --version
I0815 17:16:54.077602  427587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605215
I0815 17:16:54.095754  427587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/functional-605215/id_rsa Username:docker}
I0815 17:16:54.184316  427587 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-605215 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"96505d405c9586d6e2ec3140afbf9672489e93a8a964c3cf70ebac8a48f06493","repoDigests":["localhost/minikube-local-cache-test@sha256:89881cd92ec12d4ea61aba922cd1d87ecbb22e1a895d71a9463cc3a05933ecc3"],"repoTags":["localhost/minikube-local-cache-test:functional-605215"],"size":"3330"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-605215"],"size":"4943877"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9","docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45068794"},{"id":"900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed","repoDigests":["docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40","docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-605215 image ls --format json --alsologtostderr:
I0815 17:16:53.831046  427463 out.go:345] Setting OutFile to fd 1 ...
I0815 17:16:53.831161  427463 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:16:53.831169  427463 out.go:358] Setting ErrFile to fd 2...
I0815 17:16:53.831174  427463 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:16:53.831383  427463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
I0815 17:16:53.831938  427463 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:16:53.832038  427463 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:16:53.832433  427463 cli_runner.go:164] Run: docker container inspect functional-605215 --format={{.State.Status}}
I0815 17:16:53.848938  427463 ssh_runner.go:195] Run: systemctl --version
I0815 17:16:53.849003  427463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605215
I0815 17:16:53.866734  427463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/functional-605215/id_rsa Username:docker}
I0815 17:16:53.960855  427463 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-605215 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 96505d405c9586d6e2ec3140afbf9672489e93a8a964c3cf70ebac8a48f06493
repoDigests:
- localhost/minikube-local-cache-test@sha256:89881cd92ec12d4ea61aba922cd1d87ecbb22e1a895d71a9463cc3a05933ecc3
repoTags:
- localhost/minikube-local-cache-test:functional-605215
size: "3330"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
- docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57
repoTags:
- docker.io/library/nginx:alpine
size: "45068794"
- id: 900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed
repoDigests:
- docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40
- docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-605215
size: "4943877"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-605215 image ls --format yaml --alsologtostderr:
I0815 17:16:53.605494  427306 out.go:345] Setting OutFile to fd 1 ...
I0815 17:16:53.605766  427306 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:16:53.605776  427306 out.go:358] Setting ErrFile to fd 2...
I0815 17:16:53.605782  427306 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:16:53.605995  427306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
I0815 17:16:53.606567  427306 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:16:53.606694  427306 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:16:53.607096  427306 cli_runner.go:164] Run: docker container inspect functional-605215 --format={{.State.Status}}
I0815 17:16:53.627672  427306 ssh_runner.go:195] Run: systemctl --version
I0815 17:16:53.627715  427306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605215
I0815 17:16:53.645527  427306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/functional-605215/id_rsa Username:docker}
I0815 17:16:53.740962  427306 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-605215 ssh pgrep buildkitd: exit status 1 (241.759759ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image build -t localhost/my-image:functional-605215 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-605215 image build -t localhost/my-image:functional-605215 testdata/build --alsologtostderr: (1.625486057s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-605215 image build -t localhost/my-image:functional-605215 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 72fc61687c9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-605215
--> b96874acc0c
Successfully tagged localhost/my-image:functional-605215
b96874acc0c59b58e15e6507468902636009d1680a9ba6dab910daf315ad6788
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-605215 image build -t localhost/my-image:functional-605215 testdata/build --alsologtostderr:
I0815 17:16:54.010506  427564 out.go:345] Setting OutFile to fd 1 ...
I0815 17:16:54.010634  427564 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:16:54.010646  427564 out.go:358] Setting ErrFile to fd 2...
I0815 17:16:54.010652  427564 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:16:54.010871  427564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
I0815 17:16:54.011452  427564 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:16:54.011997  427564 config.go:182] Loaded profile config "functional-605215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:16:54.012453  427564 cli_runner.go:164] Run: docker container inspect functional-605215 --format={{.State.Status}}
I0815 17:16:54.030761  427564 ssh_runner.go:195] Run: systemctl --version
I0815 17:16:54.030822  427564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605215
I0815 17:16:54.048772  427564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/functional-605215/id_rsa Username:docker}
I0815 17:16:54.144534  427564 build_images.go:161] Building image from path: /tmp/build.2676687819.tar
I0815 17:16:54.144619  427564 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 17:16:54.152446  427564 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2676687819.tar
I0815 17:16:54.155476  427564 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2676687819.tar: stat -c "%s %y" /var/lib/minikube/build/build.2676687819.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2676687819.tar': No such file or directory
I0815 17:16:54.155501  427564 ssh_runner.go:362] scp /tmp/build.2676687819.tar --> /var/lib/minikube/build/build.2676687819.tar (3072 bytes)
I0815 17:16:54.177250  427564 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2676687819
I0815 17:16:54.185115  427564 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2676687819 -xf /var/lib/minikube/build/build.2676687819.tar
I0815 17:16:54.193101  427564 crio.go:315] Building image: /var/lib/minikube/build/build.2676687819
I0815 17:16:54.193170  427564 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-605215 /var/lib/minikube/build/build.2676687819 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0815 17:16:55.560613  427564 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-605215 /var/lib/minikube/build/build.2676687819 --cgroup-manager=cgroupfs: (1.367408751s)
I0815 17:16:55.560694  427564 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2676687819
I0815 17:16:55.569352  427564 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2676687819.tar
I0815 17:16:55.577398  427564 build_images.go:217] Built localhost/my-image:functional-605215 from /tmp/build.2676687819.tar
I0815 17:16:55.577433  427564 build_images.go:133] succeeded building to: functional-605215
I0815 17:16:55.577440  427564 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image ls
2024/08/15 17:16:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.09s)
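
From the STEP 1/3 .. 3/3 output, the build context is a three-line Dockerfile plus a content.txt; the sketch below reconstructs it from those steps alone (the actual testdata/build contents are assumed to match). On the crio runtime the build is delegated to podman inside the node, as the sudo podman build line above shows.

cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-605215 image build -t localhost/my-image:functional-605215 testdata/build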

TestFunctional/parallel/ImageCommands/Setup (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.146184735s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-605215
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.17s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image load --daemon kicbase/echo-server:functional-605215 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-605215 image load --daemon kicbase/echo-server:functional-605215 --alsologtostderr: (2.664614653s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.90s)
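
image load --daemon copies an image from the host's Docker daemon into the cluster's image store (crio here). A sketch of the pull/tag/load round trip these tests use:

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-605215
out/minikube-linux-amd64 -p functional-605215 image load --daemon kicbase/echo-server:functional-605215
out/minikube-linux-amd64 -p functional-605215 image ls | grep echo-server   # verify it landed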

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image load --daemon kicbase/echo-server:functional-605215 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-605215 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-605215 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-2j2w2" [53b0f8ba-33c0-45a8-a94f-caa1f4e0361c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-2j2w2" [53b0f8ba-33c0-45a8-a94f-caa1f4e0361c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.015634664s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.16s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-605215
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image load --daemon kicbase/echo-server:functional-605215 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image save kicbase/echo-server:functional-605215 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.90s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image rm kicbase/echo-server:functional-605215 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "311.044671ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.683062ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "329.127253ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.010186ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
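
Note: a minimal sketch of consuming `profile list -o json` from Go without assuming a schema, decoding into a generic map and listing its top-level keys; the binary path matches this run's workspace.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// --light skips probing cluster state, which is why it runs ~6x faster above.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var v map[string]any
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("output was not a JSON object:", err)
		return
	}
	for k := range v {
		fmt.Println("top-level key:", k)
	}
}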

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdany-port3688986969/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723742198044251854" to /tmp/TestFunctionalparallelMountCmdany-port3688986969/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723742198044251854" to /tmp/TestFunctionalparallelMountCmdany-port3688986969/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723742198044251854" to /tmp/TestFunctionalparallelMountCmdany-port3688986969/001/test-1723742198044251854
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (279.298134ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 17:16 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 17:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 17:16 test-1723742198044251854
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh cat /mount-9p/test-1723742198044251854
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-605215 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [92552a8f-d6c0-4475-9bc1-71611cf1b1cd] Pending
helpers_test.go:344: "busybox-mount" [92552a8f-d6c0-4475-9bc1-71611cf1b1cd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [92552a8f-d6c0-4475-9bc1-71611cf1b1cd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [92552a8f-d6c0-4475-9bc1-71611cf1b1cd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003257482s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-605215 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdany-port3688986969/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.54s)
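
Note: the non-zero findmnt exit above is expected — the 9p mount comes up asynchronously, so the harness tolerates one failure and re-runs the check. A minimal sketch of that retry, reusing this run's profile name and binary path; mounted and the retry count are ours.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// mounted reports whether the 9p mount is visible inside the guest.
func mounted(profile, mountPoint string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 2; i++ { // tolerate one failure while the mount comes up
		if mounted("functional-605215", "/mount-9p") {
			fmt.Println("9p mount is up")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}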

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 service list -o json
functional_test.go:1494: Took "501.072942ms" to run "out/minikube-linux-amd64 -p functional-605215 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-605215
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 image save --daemon kicbase/echo-server:functional-605215 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-605215
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30389
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30389
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-605215 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-605215 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-605215 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-605215 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 424341: os: process already finished
helpers_test.go:502: unable to terminate pid 424181: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-605215 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-605215 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fafb84df-d2e9-493c-a9c3-d504792f6439] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fafb84df-d2e9-493c-a9c3-d504792f6439] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004408573s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdspecific-port2095627566/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.709406ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdspecific-port2095627566/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-605215 ssh "sudo umount -f /mount-9p": exit status 1 (252.236562ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-605215 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdspecific-port2095627566/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.05s)
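
Note: the cleanup path above shows why exit status 32 from umount is acceptable — by the time the forced unmount runs, the mount daemon may already have torn the mount down, and util-linux reports "not mounted". A minimal sketch of a cleanup that tolerates exactly that case; forceUnmount is ours, not the harness's helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// forceUnmount unmounts path but treats "not mounted" as success, since
// the mount daemon may already have torn the mount down during shutdown.
func forceUnmount(path string) error {
	out, err := exec.Command("sudo", "umount", "-f", path).CombinedOutput()
	if err != nil && strings.Contains(string(out), "not mounted") {
		return nil // exit status 32: already gone, which is fine for cleanup
	}
	if err != nil {
		return fmt.Errorf("umount %s: %v: %s", path, err, out)
	}
	return nil
}

func main() {
	fmt.Println(forceUnmount("/mount-9p"))
}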

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1942394083/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1942394083/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1942394083/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T" /mount1: exit status 1 (371.266166ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-605215 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-605215 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1942394083/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1942394083/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-605215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1942394083/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)
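
Note: "unable to find parent, assuming dead" comes from a liveness probe on the mount processes. A minimal sketch of the usual Unix idiom for such a probe — signal 0, which runs the kernel's existence check without delivering anything; this is not the harness's actual helper.

package main

import (
	"fmt"
	"os"
	"syscall"
)

// alive reports whether a pid still exists, using signal 0: it performs
// the kernel's permission/existence checks without delivering a signal.
func alive(pid int) bool {
	p, err := os.FindProcess(pid) // on Unix this always succeeds
	if err != nil {
		return false
	}
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(alive(os.Getpid())) // true: we are certainly running
}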

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-605215 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
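
Note: a minimal sketch of the same jsonpath lookup from Go, assuming kubectl on PATH and this run's context and Service names.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ingressIP returns the first LoadBalancer ingress IP of a Service.
func ingressIP(kubeContext, svc string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "svc", svc,
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ip, err := ingressIP("functional-605215", "nginx-svc")
	fmt.Println(ip, err)
}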

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.113.152 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
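
Note: a minimal sketch of the direct-access probe — with the tunnel up, the Service's LoadBalancer address is routable from the host, so a plain HTTP GET suffices. The address is the one printed above; any given run will differ.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.105.113.152") // address printed above; varies per run
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode) // expect 200 from the nginx test pod
}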

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-605215 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-605215
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-605215
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-605215
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (100.90s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-896691 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0815 17:17:31.843098  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:31.850181  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:31.861513  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:31.882920  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:31.924350  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:32.005776  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:32.167295  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:32.488975  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:33.131050  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:34.412699  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:36.975017  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:42.097311  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:17:52.338983  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:18:12.821201  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-896691 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m40.232101835s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr
E0815 17:18:53.782505  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/StartCluster (100.90s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-896691 -- rollout status deployment/busybox: (1.768922008s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-9gjdc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-j8hkb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-psdq6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-9gjdc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-j8hkb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-psdq6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-9gjdc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-j8hkb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-psdq6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.58s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-9gjdc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-9gjdc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-j8hkb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-j8hkb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-psdq6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896691 -- exec busybox-7dff88458-psdq6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.99s)
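
Note: the busybox pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, picks the fifth line of nslookup's output and takes its third space-separated field, which in busybox's output format is the resolved host address. A minimal sketch of the same extraction in Go; the sample string is hypothetical and only illustrates the assumed line layout.

package main

import (
	"fmt"
	"strings"
)

// fifthLineThirdField mirrors `awk 'NR==5' | cut -d' ' -f3`.
func fifthLineThirdField(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // like cut, split on single spaces
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox-style output; only the shape of line 5 matters here.
	sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1\n"
	fmt.Println(fifthLineThirdField(sample)) // 192.168.49.1
}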

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (34.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-896691 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-896691 -v=7 --alsologtostderr: (34.179712128s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (34.98s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-896691 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp testdata/cp-test.txt ha-896691:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile878299019/001/cp-test_ha-896691.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691:/home/docker/cp-test.txt ha-896691-m02:/home/docker/cp-test_ha-896691_ha-896691-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m02 "sudo cat /home/docker/cp-test_ha-896691_ha-896691-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691:/home/docker/cp-test.txt ha-896691-m03:/home/docker/cp-test_ha-896691_ha-896691-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m03 "sudo cat /home/docker/cp-test_ha-896691_ha-896691-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691:/home/docker/cp-test.txt ha-896691-m04:/home/docker/cp-test_ha-896691_ha-896691-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m04 "sudo cat /home/docker/cp-test_ha-896691_ha-896691-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp testdata/cp-test.txt ha-896691-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile878299019/001/cp-test_ha-896691-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m02:/home/docker/cp-test.txt ha-896691:/home/docker/cp-test_ha-896691-m02_ha-896691.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691 "sudo cat /home/docker/cp-test_ha-896691-m02_ha-896691.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m02:/home/docker/cp-test.txt ha-896691-m03:/home/docker/cp-test_ha-896691-m02_ha-896691-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m03 "sudo cat /home/docker/cp-test_ha-896691-m02_ha-896691-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m02:/home/docker/cp-test.txt ha-896691-m04:/home/docker/cp-test_ha-896691-m02_ha-896691-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m04 "sudo cat /home/docker/cp-test_ha-896691-m02_ha-896691-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp testdata/cp-test.txt ha-896691-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile878299019/001/cp-test_ha-896691-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m03:/home/docker/cp-test.txt ha-896691:/home/docker/cp-test_ha-896691-m03_ha-896691.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691 "sudo cat /home/docker/cp-test_ha-896691-m03_ha-896691.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m03:/home/docker/cp-test.txt ha-896691-m02:/home/docker/cp-test_ha-896691-m03_ha-896691-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m02 "sudo cat /home/docker/cp-test_ha-896691-m03_ha-896691-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m03:/home/docker/cp-test.txt ha-896691-m04:/home/docker/cp-test_ha-896691-m03_ha-896691-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m04 "sudo cat /home/docker/cp-test_ha-896691-m03_ha-896691-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp testdata/cp-test.txt ha-896691-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile878299019/001/cp-test_ha-896691-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m04:/home/docker/cp-test.txt ha-896691:/home/docker/cp-test_ha-896691-m04_ha-896691.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691 "sudo cat /home/docker/cp-test_ha-896691-m04_ha-896691.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m04:/home/docker/cp-test.txt ha-896691-m02:/home/docker/cp-test_ha-896691-m04_ha-896691-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m02 "sudo cat /home/docker/cp-test_ha-896691-m04_ha-896691-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 cp ha-896691-m04:/home/docker/cp-test.txt ha-896691-m03:/home/docker/cp-test_ha-896691-m04_ha-896691-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 ssh -n ha-896691-m03 "sudo cat /home/docker/cp-test_ha-896691-m04_ha-896691-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.43s)
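
Note: the block above is a full ordered-pair matrix — testdata/cp-test.txt goes onto each node, then from every node to every other node, verified by cat over ssh each time. A minimal sketch of generating that matrix, using this run's profile and node names; run is our helper, not the harness's.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary built for this job with the given args.
func run(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	profile := "ha-896691"
	nodes := []string{"ha-896691", "ha-896691-m02", "ha-896691-m03", "ha-896691-m04"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			// Copy src's file to dst under the same name pattern the test uses.
			target := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			if err := run("-p", profile, "cp", src+":/home/docker/cp-test.txt", target); err != nil {
				fmt.Println("cp failed:", src, "->", dst, err)
			}
		}
	}
}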

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-896691 node stop m02 -v=7 --alsologtostderr: (11.790379162s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr: exit status 7 (644.805147ms)

                                                
                                                
-- stdout --
	ha-896691
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-896691-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-896691-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-896691-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:20:01.726806  449433 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:20:01.727144  449433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:20:01.727155  449433 out.go:358] Setting ErrFile to fd 2...
	I0815 17:20:01.727159  449433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:20:01.727345  449433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:20:01.727509  449433 out.go:352] Setting JSON to false
	I0815 17:20:01.727537  449433 mustload.go:65] Loading cluster: ha-896691
	I0815 17:20:01.727667  449433 notify.go:220] Checking for updates...
	I0815 17:20:01.728073  449433 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:20:01.728097  449433 status.go:255] checking status of ha-896691 ...
	I0815 17:20:01.728718  449433 cli_runner.go:164] Run: docker container inspect ha-896691 --format={{.State.Status}}
	I0815 17:20:01.746562  449433 status.go:330] ha-896691 host status = "Running" (err=<nil>)
	I0815 17:20:01.746589  449433 host.go:66] Checking if "ha-896691" exists ...
	I0815 17:20:01.746847  449433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691
	I0815 17:20:01.763198  449433 host.go:66] Checking if "ha-896691" exists ...
	I0815 17:20:01.763557  449433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:20:01.763604  449433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691
	I0815 17:20:01.781273  449433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691/id_rsa Username:docker}
	I0815 17:20:01.873736  449433 ssh_runner.go:195] Run: systemctl --version
	I0815 17:20:01.877604  449433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:20:01.887975  449433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:20:01.934864  449433 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-08-15 17:20:01.92565447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:20:01.935442  449433 kubeconfig.go:125] found "ha-896691" server: "https://192.168.49.254:8443"
	I0815 17:20:01.935477  449433 api_server.go:166] Checking apiserver status ...
	I0815 17:20:01.935514  449433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:20:01.945882  449433 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup
	I0815 17:20:01.954238  449433 api_server.go:182] apiserver freezer: "13:freezer:/docker/b9db8034efc52b5b26080c945d3420002981adb79d465d69f932658fe861d8aa/crio/crio-2196fd9a475cb418affe5c0c372116b5b97cb02489f221060771d08bd506cdc2"
	I0815 17:20:01.954301  449433 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b9db8034efc52b5b26080c945d3420002981adb79d465d69f932658fe861d8aa/crio/crio-2196fd9a475cb418affe5c0c372116b5b97cb02489f221060771d08bd506cdc2/freezer.state
	I0815 17:20:01.961555  449433 api_server.go:204] freezer state: "THAWED"
	I0815 17:20:01.961579  449433 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0815 17:20:01.965394  449433 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0815 17:20:01.965416  449433 status.go:422] ha-896691 apiserver status = Running (err=<nil>)
	I0815 17:20:01.965429  449433 status.go:257] ha-896691 status: &{Name:ha-896691 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:20:01.965454  449433 status.go:255] checking status of ha-896691-m02 ...
	I0815 17:20:01.965742  449433 cli_runner.go:164] Run: docker container inspect ha-896691-m02 --format={{.State.Status}}
	I0815 17:20:01.982930  449433 status.go:330] ha-896691-m02 host status = "Stopped" (err=<nil>)
	I0815 17:20:01.982950  449433 status.go:343] host is not running, skipping remaining checks
	I0815 17:20:01.982956  449433 status.go:257] ha-896691-m02 status: &{Name:ha-896691-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:20:01.982979  449433 status.go:255] checking status of ha-896691-m03 ...
	I0815 17:20:01.983210  449433 cli_runner.go:164] Run: docker container inspect ha-896691-m03 --format={{.State.Status}}
	I0815 17:20:01.999702  449433 status.go:330] ha-896691-m03 host status = "Running" (err=<nil>)
	I0815 17:20:01.999728  449433 host.go:66] Checking if "ha-896691-m03" exists ...
	I0815 17:20:01.999985  449433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m03
	I0815 17:20:02.017012  449433 host.go:66] Checking if "ha-896691-m03" exists ...
	I0815 17:20:02.017281  449433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:20:02.017315  449433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m03
	I0815 17:20:02.033929  449433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m03/id_rsa Username:docker}
	I0815 17:20:02.125964  449433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:20:02.136962  449433 kubeconfig.go:125] found "ha-896691" server: "https://192.168.49.254:8443"
	I0815 17:20:02.136990  449433 api_server.go:166] Checking apiserver status ...
	I0815 17:20:02.137022  449433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:20:02.146400  449433 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	I0815 17:20:02.154557  449433 api_server.go:182] apiserver freezer: "13:freezer:/docker/f974eea2caf3f761aef00acda1ff91b5ea2fab4b4ed0893580202ab0ad19dffd/crio/crio-923dddfe36861e32541d5227cc2b44ebd5bc0b7d84b8a41e512297809684141e"
	I0815 17:20:02.154638  449433 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f974eea2caf3f761aef00acda1ff91b5ea2fab4b4ed0893580202ab0ad19dffd/crio/crio-923dddfe36861e32541d5227cc2b44ebd5bc0b7d84b8a41e512297809684141e/freezer.state
	I0815 17:20:02.162070  449433 api_server.go:204] freezer state: "THAWED"
	I0815 17:20:02.162101  449433 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0815 17:20:02.165899  449433 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0815 17:20:02.165925  449433 status.go:422] ha-896691-m03 apiserver status = Running (err=<nil>)
	I0815 17:20:02.165937  449433 status.go:257] ha-896691-m03 status: &{Name:ha-896691-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:20:02.165963  449433 status.go:255] checking status of ha-896691-m04 ...
	I0815 17:20:02.166227  449433 cli_runner.go:164] Run: docker container inspect ha-896691-m04 --format={{.State.Status}}
	I0815 17:20:02.183957  449433 status.go:330] ha-896691-m04 host status = "Running" (err=<nil>)
	I0815 17:20:02.183982  449433 host.go:66] Checking if "ha-896691-m04" exists ...
	I0815 17:20:02.184234  449433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896691-m04
	I0815 17:20:02.200869  449433 host.go:66] Checking if "ha-896691-m04" exists ...
	I0815 17:20:02.201123  449433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:20:02.201156  449433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896691-m04
	I0815 17:20:02.217659  449433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/ha-896691-m04/id_rsa Username:docker}
	I0815 17:20:02.313210  449433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:20:02.323226  449433 status.go:257] ha-896691-m04 status: &{Name:ha-896691-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.44s)
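
Note: in the stderr trace above, apiserver health on each surviving control plane is established in two steps — find the kube-apiserver pid, confirm its freezer cgroup is THAWED, then hit /healthz. A minimal sketch of the freezer half for cgroup v1 paths of the shape shown in the trace; the path in main is a placeholder, not a real ID, and this is not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// frozen reports whether the freezer cgroup at dir is in any state other
// than THAWED (i.e. the contained process group is paused).
func frozen(dir string) (bool, error) {
	b, err := os.ReadFile(dir + "/freezer.state")
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(b)) != "THAWED", nil
}

func main() {
	// Placeholder path; the real one embeds the docker and crio container IDs
	// recovered from /proc/<apiserver-pid>/cgroup, as in the trace above.
	ok, err := frozen("/sys/fs/cgroup/freezer/docker/<container-id>/crio/<crio-id>")
	fmt.Println(ok, err)
}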

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (22.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 node start m02 -v=7 --alsologtostderr
E0815 17:20:15.703980  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-896691 node start m02 -v=7 --alsologtostderr: (21.602668615s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.63s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.80s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.801697356s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (182.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-896691 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-896691 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-896691 -v=7 --alsologtostderr: (36.528928386s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-896691 --wait=true -v=7 --alsologtostderr
E0815 17:21:27.415778  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:27.422124  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:27.433480  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:27.454862  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:27.496249  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:27.577723  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:27.739235  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:28.060918  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:28.702992  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:29.984631  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:32.546213  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:37.668534  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:47.910094  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:22:08.392422  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:22:31.842994  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:22:49.354479  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:22:59.545778  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-896691 --wait=true -v=7 --alsologtostderr: (2m25.818254291s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-896691
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (182.44s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 stop -v=7 --alsologtostderr
E0815 17:24:11.275855  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-896691 stop -v=7 --alsologtostderr: (35.296950565s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr: exit status 7 (94.578896ms)

                                                
                                                
-- stdout --
	ha-896691
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-896691-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-896691-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:24:20.923261  468294 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:24:20.923384  468294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:24:20.923394  468294 out.go:358] Setting ErrFile to fd 2...
	I0815 17:24:20.923401  468294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:24:20.923598  468294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:24:20.923781  468294 out.go:352] Setting JSON to false
	I0815 17:24:20.923815  468294 mustload.go:65] Loading cluster: ha-896691
	I0815 17:24:20.923913  468294 notify.go:220] Checking for updates...
	I0815 17:24:20.924282  468294 config.go:182] Loaded profile config "ha-896691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:24:20.924302  468294 status.go:255] checking status of ha-896691 ...
	I0815 17:24:20.924728  468294 cli_runner.go:164] Run: docker container inspect ha-896691 --format={{.State.Status}}
	I0815 17:24:20.941196  468294 status.go:330] ha-896691 host status = "Stopped" (err=<nil>)
	I0815 17:24:20.941217  468294 status.go:343] host is not running, skipping remaining checks
	I0815 17:24:20.941225  468294 status.go:257] ha-896691 status: &{Name:ha-896691 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:24:20.941279  468294 status.go:255] checking status of ha-896691-m02 ...
	I0815 17:24:20.941653  468294 cli_runner.go:164] Run: docker container inspect ha-896691-m02 --format={{.State.Status}}
	I0815 17:24:20.958343  468294 status.go:330] ha-896691-m02 host status = "Stopped" (err=<nil>)
	I0815 17:24:20.958363  468294 status.go:343] host is not running, skipping remaining checks
	I0815 17:24:20.958369  468294 status.go:257] ha-896691-m02 status: &{Name:ha-896691-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:24:20.958393  468294 status.go:255] checking status of ha-896691-m04 ...
	I0815 17:24:20.958628  468294 cli_runner.go:164] Run: docker container inspect ha-896691-m04 --format={{.State.Status}}
	I0815 17:24:20.973586  468294 status.go:330] ha-896691-m04 host status = "Stopped" (err=<nil>)
	I0815 17:24:20.973634  468294 status.go:343] host is not running, skipping remaining checks
	I0815 17:24:20.973647  468294 status.go:257] ha-896691-m04 status: &{Name:ha-896691-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.39s)
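Note on the non-zero exit above: with every node stopped, status reports cluster state through its exit code (7 in this run) rather than through test failure. A minimal sketch of branching on that code in a wrapper script; the profile name is this run's, and the exit-code meanings are as observed in this log, not an exhaustive mapping:

	# Query the stopped cluster; status exits non-zero when any host is down.
	out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr
	code=$?
	if [ "$code" -eq 0 ]; then
		echo "all nodes running"
	elif [ "$code" -eq 7 ]; then
		echo "one or more hosts stopped (as seen above)"
	else
		echo "unexpected status exit code: $code" >&2
	fi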

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (49.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-896691 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-896691 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (48.998873075s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (49.75s)
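The last check above flattens each node's Ready condition with a go-template. Run standalone against the same cluster, it is just the following (one "True" line per ready node expected):

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'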

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (53.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-896691 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-896691 --control-plane -v=7 --alsologtostderr: (52.323773982s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-896691 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (53.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.63s)

                                                
                                    
TestJSONOutput/start/Command (40.06s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-343918 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0815 17:26:27.414421  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-343918 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (40.062964405s)
--- PASS: TestJSONOutput/start/Command (40.06s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-343918 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-343918 --output=json --user=testUser
E0815 17:26:55.117230  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-343918 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-343918 --output=json --user=testUser: (5.755748544s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-699244 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-699244 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.07716ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1eb651f0-d425-4ca1-8019-2951dee09709","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-699244] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1b59887-1e6d-4bf9-b97c-994a80f16f61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19450"}}
	{"specversion":"1.0","id":"cabe9e60-9ebd-4ab9-b35c-b1a95e388aff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2a03393e-4346-4e8a-9cf8-c4bb342fbaff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig"}}
	{"specversion":"1.0","id":"5372a489-1ad6-4af7-849b-78b3a5051762","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube"}}
	{"specversion":"1.0","id":"45272a76-1684-4866-bbfc-e518d34fddc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c1b8d632-e15a-4c1a-80d1-781bb26bb813","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"098cbcf6-532c-4c79-99c7-36bed1935bc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-699244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-699244
--- PASS: TestErrorJSONOutput (0.19s)
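Each stdout line above is a CloudEvents-style JSON envelope (specversion/id/source/type/data). To pull the failure details out of such a stream, a small jq filter is enough; this is a sketch that assumes jq is available, with the event type string taken from the output above:

	out/minikube-linux-amd64 start -p json-output-error-699244 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name) (exit \(.exitcode)): \(.message)"'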

                                                
                                    
TestKicCustomNetwork/create_custom_network (28.07s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-817605 --network=
E0815 17:27:31.842820  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-817605 --network=: (26.004012244s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-817605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-817605
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-817605: (2.050048284s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.07s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.02s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-735611 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-735611 --network=bridge: (21.101719851s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-735611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-735611
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-735611: (1.898072935s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.02s)

                                                
                                    
TestKicExistingNetwork (21.94s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-770907 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-770907 --network=existing-network: (19.915410259s)
helpers_test.go:175: Cleaning up "existing-network-770907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-770907
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-770907: (1.890126975s)
--- PASS: TestKicExistingNetwork (21.94s)

                                                
                                    
TestKicCustomSubnet (23.33s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-757409 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-757409 --subnet=192.168.60.0/24: (21.371450834s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-757409 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-757409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-757409
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-757409: (1.937754184s)
--- PASS: TestKicCustomSubnet (23.33s)

                                                
                                    
TestKicStaticIP (25.24s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-384231 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-384231 --static-ip=192.168.200.200: (23.143747555s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-384231 ip
helpers_test.go:175: Cleaning up "static-ip-384231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-384231
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-384231: (1.975129048s)
--- PASS: TestKicStaticIP (25.24s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (49.07s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-728260 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-728260 --driver=docker  --container-runtime=crio: (22.93198765s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-731266 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-731266 --driver=docker  --container-runtime=crio: (21.016858726s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-728260
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-731266
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-731266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-731266
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-731266: (1.849239174s)
helpers_test.go:175: Cleaning up "first-728260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-728260
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-728260: (2.240178417s)
--- PASS: TestMinikubeProfile (49.07s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-138682 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-138682 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.326921152s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.33s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-138682 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
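Verification here is only an ls of the mount point. To confirm that the non-default options from the start flags (msize 6543, port 46464) actually applied, one could also inspect the mount table on the node; a sketch, not part of the test:

	out/minikube-linux-amd64 -p mount-start-1-138682 ssh -- "mount | grep /minikube-host"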

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-153114 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-153114 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.492470619s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.49s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-153114 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-138682 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-138682 --alsologtostderr -v=5: (1.583461814s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-153114 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-153114
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-153114: (1.164558816s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.18s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-153114
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-153114: (6.176280195s)
--- PASS: TestMountStart/serial/RestartStopped (7.18s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-153114 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (71.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-283401 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0815 17:31:27.415428  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-283401 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m11.239531226s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.67s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-283401 -- rollout status deployment/busybox: (1.790329543s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- exec busybox-7dff88458-vnstv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- exec busybox-7dff88458-xblpb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- exec busybox-7dff88458-vnstv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- exec busybox-7dff88458-xblpb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- exec busybox-7dff88458-vnstv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- exec busybox-7dff88458-xblpb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.10s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- exec busybox-7dff88458-vnstv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- exec busybox-7dff88458-vnstv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- exec busybox-7dff88458-xblpb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-283401 -- exec busybox-7dff88458-xblpb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)
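The pipeline inside the exec above recovers the host gateway address from BusyBox nslookup output: awk 'NR==5' keeps the fifth output line and cut -d' ' -f3 takes its third space-separated field, which is then pinged. From inside either busybox pod it reduces to this (the line/field offsets assume BusyBox nslookup formatting, as the test does):

	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"   # 192.168.67.1 in this run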

                                                
                                    
TestMultiNode/serial/AddNode (28.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-283401 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-283401 -v 3 --alsologtostderr: (27.751678604s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.33s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-283401 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp testdata/cp-test.txt multinode-283401:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp multinode-283401:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3078619329/001/cp-test_multinode-283401.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp multinode-283401:/home/docker/cp-test.txt multinode-283401-m02:/home/docker/cp-test_multinode-283401_multinode-283401-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m02 "sudo cat /home/docker/cp-test_multinode-283401_multinode-283401-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp multinode-283401:/home/docker/cp-test.txt multinode-283401-m03:/home/docker/cp-test_multinode-283401_multinode-283401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m03 "sudo cat /home/docker/cp-test_multinode-283401_multinode-283401-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp testdata/cp-test.txt multinode-283401-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp multinode-283401-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3078619329/001/cp-test_multinode-283401-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp multinode-283401-m02:/home/docker/cp-test.txt multinode-283401:/home/docker/cp-test_multinode-283401-m02_multinode-283401.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401 "sudo cat /home/docker/cp-test_multinode-283401-m02_multinode-283401.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp multinode-283401-m02:/home/docker/cp-test.txt multinode-283401-m03:/home/docker/cp-test_multinode-283401-m02_multinode-283401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m03 "sudo cat /home/docker/cp-test_multinode-283401-m02_multinode-283401-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp testdata/cp-test.txt multinode-283401-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp multinode-283401-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3078619329/001/cp-test_multinode-283401-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp multinode-283401-m03:/home/docker/cp-test.txt multinode-283401:/home/docker/cp-test_multinode-283401-m03_multinode-283401.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401 "sudo cat /home/docker/cp-test_multinode-283401-m03_multinode-283401.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 cp multinode-283401-m03:/home/docker/cp-test.txt multinode-283401-m02:/home/docker/cp-test_multinode-283401-m03_multinode-283401-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m02 "sudo cat /home/docker/cp-test_multinode-283401-m03_multinode-283401-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.76s)
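All of the copies above follow one pattern: a minikube cp in some direction, then an ssh "sudo cat" on the destination to verify the content arrived. Reduced to a single round trip with this run's profile and paths:

	out/minikube-linux-amd64 -p multinode-283401 cp testdata/cp-test.txt multinode-283401-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-283401 ssh -n multinode-283401-m02 "sudo cat /home/docker/cp-test.txt"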

                                                
                                    
TestMultiNode/serial/StopNode (2.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-283401 node stop m03: (1.165102266s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-283401 status: exit status 7 (448.501594ms)

                                                
                                                
-- stdout --
	multinode-283401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-283401-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-283401-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-283401 status --alsologtostderr: exit status 7 (448.565357ms)

                                                
                                                
-- stdout --
	multinode-283401
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-283401-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-283401-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:32:14.627449  533686 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:32:14.627559  533686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:32:14.627568  533686 out.go:358] Setting ErrFile to fd 2...
	I0815 17:32:14.627576  533686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:32:14.627778  533686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:32:14.627988  533686 out.go:352] Setting JSON to false
	I0815 17:32:14.628021  533686 mustload.go:65] Loading cluster: multinode-283401
	I0815 17:32:14.628125  533686 notify.go:220] Checking for updates...
	I0815 17:32:14.628470  533686 config.go:182] Loaded profile config "multinode-283401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:32:14.628488  533686 status.go:255] checking status of multinode-283401 ...
	I0815 17:32:14.628923  533686 cli_runner.go:164] Run: docker container inspect multinode-283401 --format={{.State.Status}}
	I0815 17:32:14.645875  533686 status.go:330] multinode-283401 host status = "Running" (err=<nil>)
	I0815 17:32:14.645909  533686 host.go:66] Checking if "multinode-283401" exists ...
	I0815 17:32:14.646196  533686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-283401
	I0815 17:32:14.663221  533686 host.go:66] Checking if "multinode-283401" exists ...
	I0815 17:32:14.663476  533686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:32:14.663520  533686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-283401
	I0815 17:32:14.679491  533686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/multinode-283401/id_rsa Username:docker}
	I0815 17:32:14.773369  533686 ssh_runner.go:195] Run: systemctl --version
	I0815 17:32:14.777173  533686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:32:14.787014  533686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:32:14.833563  533686 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-08-15 17:32:14.824185435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:32:14.834124  533686 kubeconfig.go:125] found "multinode-283401" server: "https://192.168.67.2:8443"
	I0815 17:32:14.834153  533686 api_server.go:166] Checking apiserver status ...
	I0815 17:32:14.834185  533686 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:32:14.844233  533686 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1512/cgroup
	I0815 17:32:14.852247  533686 api_server.go:182] apiserver freezer: "13:freezer:/docker/e7a05503a0754390ee93cd6283e37fd08b6eab8755d32ec4ca9d6af94f91c2e9/crio/crio-b44153cf8bb328184a296e9c469984ba2f37d881454665b919e3fb1b9494120c"
	I0815 17:32:14.852300  533686 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e7a05503a0754390ee93cd6283e37fd08b6eab8755d32ec4ca9d6af94f91c2e9/crio/crio-b44153cf8bb328184a296e9c469984ba2f37d881454665b919e3fb1b9494120c/freezer.state
	I0815 17:32:14.859574  533686 api_server.go:204] freezer state: "THAWED"
	I0815 17:32:14.859605  533686 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0815 17:32:14.863928  533686 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0815 17:32:14.863952  533686 status.go:422] multinode-283401 apiserver status = Running (err=<nil>)
	I0815 17:32:14.863964  533686 status.go:257] multinode-283401 status: &{Name:multinode-283401 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:32:14.863990  533686 status.go:255] checking status of multinode-283401-m02 ...
	I0815 17:32:14.864243  533686 cli_runner.go:164] Run: docker container inspect multinode-283401-m02 --format={{.State.Status}}
	I0815 17:32:14.881286  533686 status.go:330] multinode-283401-m02 host status = "Running" (err=<nil>)
	I0815 17:32:14.881309  533686 host.go:66] Checking if "multinode-283401-m02" exists ...
	I0815 17:32:14.881589  533686 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-283401-m02
	I0815 17:32:14.898188  533686 host.go:66] Checking if "multinode-283401-m02" exists ...
	I0815 17:32:14.898464  533686 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:32:14.898509  533686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-283401-m02
	I0815 17:32:14.914418  533686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19450-377193/.minikube/machines/multinode-283401-m02/id_rsa Username:docker}
	I0815 17:32:15.005242  533686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:32:15.015166  533686 status.go:257] multinode-283401-m02 status: &{Name:multinode-283401-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:32:15.015197  533686 status.go:255] checking status of multinode-283401-m03 ...
	I0815 17:32:15.015450  533686 cli_runner.go:164] Run: docker container inspect multinode-283401-m03 --format={{.State.Status}}
	I0815 17:32:15.031540  533686 status.go:330] multinode-283401-m03 host status = "Stopped" (err=<nil>)
	I0815 17:32:15.031571  533686 status.go:343] host is not running, skipping remaining checks
	I0815 17:32:15.031577  533686 status.go:257] multinode-283401-m03 status: &{Name:multinode-283401-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.06s)
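
Note on the status check traced above: the log shows minikube locating the kube-apiserver process, confirming its freezer cgroup is THAWED, and only then probing /healthz. A minimal standalone sketch of that sequence follows — not minikube's actual implementation; the cgroup v1 freezer layout and the insecure TLS client (self-signed apiserver cert) are illustrative assumptions.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func apiserverHealthy(endpoint string) error {
	// Step 1: locate the newest kube-apiserver process, as the logged
	// `pgrep -xnf kube-apiserver.*minikube.*` does.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return fmt.Errorf("apiserver process not found: %w", err)
	}
	pid := strings.TrimSpace(string(out))

	// Step 2: read the process's freezer cgroup state (cgroup v1 layout, as in
	// the log). A FROZEN cgroup means the process exists but cannot run, so a
	// /healthz probe would just hang instead of failing cleanly.
	script := fmt.Sprintf(
		"cat /sys/fs/cgroup/freezer$(awk -F: '/freezer/{print $3}' /proc/%s/cgroup)/freezer.state", pid)
	if state, err := exec.Command("sh", "-c", script).Output(); err == nil {
		if s := strings.TrimSpace(string(state)); s != "THAWED" {
			return fmt.Errorf("apiserver freezer state is %q, want THAWED", s)
		}
	}

	// Step 3: only now probe /healthz and require HTTP 200.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint taken from the log above.
	if err := apiserverHealthy("https://192.168.67.2:8443"); err != nil {
		fmt.Println("not healthy:", err)
		return
	}
	fmt.Println("ok")
}
```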

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-283401 node start m03 -v=7 --alsologtostderr: (8.27951873s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.92s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (93.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-283401
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-283401
E0815 17:32:31.843333  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-283401: (24.618340201s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-283401 --wait=true -v=8 --alsologtostderr
E0815 17:33:54.907510  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-283401 --wait=true -v=8 --alsologtostderr: (1m8.918479609s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-283401
--- PASS: TestMultiNode/serial/RestartKeepsNodes (93.63s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-283401 node delete m03: (4.65061107s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.20s)
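
The final assertion above uses a go-template to print every node's "Ready" condition after the delete. Below is a rough Go equivalent of the same check, parsing `kubectl get nodes -o json` instead of a template; the structs mirror the standard Kubernetes NodeList shape, trimmed to just the fields the check needs.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeList models only metadata.name and status.conditions from the
// standard `kubectl get nodes -o json` output.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	// Print Ready status per node, as the go-template in the log does.
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}
```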

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-283401 stop: (23.573609219s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-283401 status: exit status 7 (78.884759ms)

                                                
                                                
-- stdout --
	multinode-283401
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-283401-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-283401 status --alsologtostderr: exit status 7 (80.797969ms)

                                                
                                                
-- stdout --
	multinode-283401
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-283401-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:34:26.475966  543377 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:34:26.476215  543377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:34:26.476224  543377 out.go:358] Setting ErrFile to fd 2...
	I0815 17:34:26.476228  543377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:34:26.476435  543377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:34:26.476656  543377 out.go:352] Setting JSON to false
	I0815 17:34:26.476695  543377 mustload.go:65] Loading cluster: multinode-283401
	I0815 17:34:26.476811  543377 notify.go:220] Checking for updates...
	I0815 17:34:26.477060  543377 config.go:182] Loaded profile config "multinode-283401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:34:26.477077  543377 status.go:255] checking status of multinode-283401 ...
	I0815 17:34:26.477451  543377 cli_runner.go:164] Run: docker container inspect multinode-283401 --format={{.State.Status}}
	I0815 17:34:26.496670  543377 status.go:330] multinode-283401 host status = "Stopped" (err=<nil>)
	I0815 17:34:26.496689  543377 status.go:343] host is not running, skipping remaining checks
	I0815 17:34:26.496695  543377 status.go:257] multinode-283401 status: &{Name:multinode-283401 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:34:26.496727  543377 status.go:255] checking status of multinode-283401-m02 ...
	I0815 17:34:26.496965  543377 cli_runner.go:164] Run: docker container inspect multinode-283401-m02 --format={{.State.Status}}
	I0815 17:34:26.513348  543377 status.go:330] multinode-283401-m02 host status = "Stopped" (err=<nil>)
	I0815 17:34:26.513392  543377 status.go:343] host is not running, skipping remaining checks
	I0815 17:34:26.513404  543377 status.go:257] multinode-283401-m02 status: &{Name:multinode-283401-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.73s)
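
Both status invocations above exit 7 rather than 0; as the stdout shows, that is the expected "host stopped" code after `minikube stop`, not a failure. A small sketch of handling that exit code explicitly — the code-7 meaning is taken from the output above, the binary path from the log, and the rest is illustrative.

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-283401", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all components running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code 7 is the "stopped" status seen in the log above, so a
		// test that just stopped the cluster should accept it.
		fmt.Println("cluster is stopped (exit 7), expected after `minikube stop`")
	default:
		fmt.Println("unexpected status error:", err)
	}
}
```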

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-283401 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-283401 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (48.867142911s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-283401 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.42s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-283401
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-283401-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-283401-m02 --driver=docker  --container-runtime=crio: exit status 14 (63.050667ms)

                                                
                                                
-- stdout --
	* [multinode-283401-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-283401-m02' is duplicated with machine name 'multinode-283401-m02' in profile 'multinode-283401'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-283401-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-283401-m03 --driver=docker  --container-runtime=crio: (19.993582786s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-283401
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-283401: exit status 80 (260.132082ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-283401 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-283401-m03 already exists in multinode-283401-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-283401-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-283401-m03: (1.819012555s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.18s)
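
A minimal sketch of the uniqueness rule this test exercises, reconstructed from the two error messages above: a new profile name may collide either with an existing profile or with a machine name inside a multi-node profile (e.g. "<profile>-m02"). The Profile struct below is an illustrative stand-in, not minikube's real type.

```go
package main

import "fmt"

type Profile struct {
	Name  string
	Nodes []string // machine names, e.g. multinode-283401, multinode-283401-m02
}

// validateName rejects a profile name that matches an existing profile or
// any machine inside one, mirroring the MK_USAGE failure in the log.
func validateName(name string, existing []Profile) error {
	for _, p := range existing {
		if p.Name == name {
			return fmt.Errorf("profile name %q already exists", name)
		}
		for _, n := range p.Nodes {
			if n == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					name, n, p.Name)
			}
		}
	}
	return nil
}

func main() {
	profiles := []Profile{{
		Name:  "multinode-283401",
		Nodes: []string{"multinode-283401", "multinode-283401-m02"},
	}}
	fmt.Println(validateName("multinode-283401-m02", profiles)) // rejected, as in the log
	fmt.Println(validateName("brand-new-profile", profiles))    // accepted: <nil>
}
```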

                                                
                                    
TestPreload (119.44s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-699085 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0815 17:36:27.414467  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-699085 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.877208079s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-699085 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-699085
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-699085: (5.661479649s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-699085 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0815 17:37:31.843306  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-699085 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.607768018s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-699085 image list
helpers_test.go:175: Cleaning up "test-preload-699085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-699085
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-699085: (2.313087811s)
--- PASS: TestPreload (119.44s)
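
The closing `image list` is what makes this a preload test: the busybox image pulled before the stop must still be present after restarting onto the preloaded tarball. A standalone sketch of that final check — binary path and profile name are from the log; the containment check is a simplification of the test's assertion.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-699085", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// The image was pulled before the stop/restart cycle; it should survive it.
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox survived the restart")
	} else {
		fmt.Println("busybox missing after restart")
	}
}
```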

                                                
                                    
TestScheduledStopUnix (95.71s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-461095 --memory=2048 --driver=docker  --container-runtime=crio
E0815 17:37:50.479476  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-461095 --memory=2048 --driver=docker  --container-runtime=crio: (20.049204392s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-461095 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-461095 -n scheduled-stop-461095
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-461095 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-461095 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-461095 -n scheduled-stop-461095
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-461095
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-461095 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-461095
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-461095: exit status 7 (64.151987ms)

                                                
                                                
-- stdout --
	scheduled-stop-461095
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-461095 -n scheduled-stop-461095
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-461095 -n scheduled-stop-461095: exit status 7 (62.128714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-461095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-461095
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-461095: (4.382610255s)
--- PASS: TestScheduledStopUnix (95.71s)
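
A toy sketch of the schedule/replace/cancel protocol the test drives: each new `--schedule` replaces any pending stop, and `--cancel-scheduled` kills the pending one. Real minikube daemonizes a background process and records its pid — hence the "os: process already finished" signal errors above when a schedule is replaced; here a goroutine and a cancel channel stand in for that, purely for illustration.

```go
package main

import (
	"fmt"
	"time"
)

type scheduler struct {
	cancel chan struct{}
}

// schedule replaces any pending stop with a new one that fires after d.
func (s *scheduler) schedule(d time.Duration, stop func()) {
	s.cancelScheduled() // like re-running `minikube stop --schedule`, which kills the old daemon
	s.cancel = make(chan struct{})
	done := s.cancel
	go func() {
		select {
		case <-time.After(d):
			stop()
		case <-done:
			// cancelled (or replaced) before firing
		}
	}()
}

func (s *scheduler) cancelScheduled() {
	if s.cancel != nil {
		close(s.cancel)
		s.cancel = nil
	}
}

func main() {
	var s scheduler
	s.schedule(5*time.Minute, func() { fmt.Println("stopping cluster") })
	// Re-scheduling replaces the pending 5m stop, as in the test.
	s.schedule(150*time.Millisecond, func() { fmt.Println("stopping cluster") })
	time.Sleep(300 * time.Millisecond) // the 150ms stop fires

	s.schedule(150*time.Millisecond, func() { fmt.Println("stopping cluster") })
	s.cancelScheduled()                // like `minikube stop --cancel-scheduled`
	time.Sleep(300 * time.Millisecond) // nothing fires
}
```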

                                                
                                    
TestInsufficientStorage (9.41s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-183056 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-183056 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.121828628s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4e72a109-93bc-4dc9-825e-4bbffa3af80e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-183056] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bb4f548-d796-423b-a2f2-2213af7a3456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19450"}}
	{"specversion":"1.0","id":"42f34c09-cc73-4dab-b01a-6455276348ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dab3e063-e51c-43c8-8efe-ab973cffe823","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig"}}
	{"specversion":"1.0","id":"5c655dd7-22c6-4888-a1b0-523924b4049e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube"}}
	{"specversion":"1.0","id":"6190c765-cd5e-4ace-b91f-a70b30619d23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9bb577ab-6f7c-4c72-8cd4-f4a85fd9971a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"82ac70e3-9723-45a4-b8d2-46492bee9b38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"02ab9b7c-fec8-4050-97ab-69d4057cd655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cb9020cc-2f44-4fd5-baff-3e495fd10d47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c83b4363-1dcf-46da-9143-2fb84455eafe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5ac045c0-3581-42b1-a656-33bc133a581b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-183056\" primary control-plane node in \"insufficient-storage-183056\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c60ab82-b00e-4a89-9464-7ba9408d3e8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723650208-19443 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6ea8f9e-c1e1-49fa-abff-244ca80c8293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7ea5aaa-70dc-43a3-9246-7e175a31617f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-183056 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-183056 --output=json --layout=cluster: exit status 7 (252.157221ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-183056","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-183056","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 17:39:24.572406  565749 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-183056" does not appear in /home/jenkins/minikube-integration/19450-377193/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-183056 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-183056 --output=json --layout=cluster: exit status 7 (247.617122ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-183056","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-183056","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 17:39:24.820867  565850 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-183056" does not appear in /home/jenkins/minikube-integration/19450-377193/kubeconfig
	E0815 17:39:24.830196  565850 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/insufficient-storage-183056/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-183056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-183056
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-183056: (1.787106158s)
--- PASS: TestInsufficientStorage (9.41s)
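
With `--output=json`, each stdout line above is a CloudEvents envelope whose `data` payload depends on the event type. A short sketch of consuming such a stream, modeling only the fields this log actually uses; the sample lines are abridged from the output above.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent covers just the envelope fields used here; in this log every
// data value is a string, so a string map suffices.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8","name":"Creating Container","totalsteps":"19"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`

	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise in the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}
```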

                                                
                                    
TestRunningBinaryUpgrade (59.4s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4124575132 start -p running-upgrade-578639 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4124575132 start -p running-upgrade-578639 --memory=2200 --vm-driver=docker  --container-runtime=crio: (29.357166177s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-578639 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-578639 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.897810194s)
helpers_test.go:175: Cleaning up "running-upgrade-578639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-578639
E0815 17:41:27.415447  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-578639: (4.393203341s)
--- PASS: TestRunningBinaryUpgrade (59.40s)
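
A sketch of the upgrade-test pattern visible above: bring a cluster up with an old released binary, then run the freshly built binary against the same profile and assert it starts cleanly. The binary paths and profile name are taken from the log; everything else is a simplified stand-in for the test helper.

```go
package main

import (
	"fmt"
	"os/exec"
)

func run(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v failed: %v\n%s", bin, args, err, out)
	}
	return nil
}

func main() {
	profile := "running-upgrade-578639"
	// 1. Start with the old release (downloaded to /tmp by the test harness).
	if err := run("/tmp/minikube-v1.26.0.4124575132", "start", "-p", profile,
		"--memory=2200", "--vm-driver=docker", "--container-runtime=crio"); err != nil {
		fmt.Println(err)
		return
	}
	// 2. Upgrade in place: the new binary reuses the still-running profile.
	if err := run("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=2200", "--driver=docker", "--container-runtime=crio"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("binary upgrade on a running cluster succeeded")
}
```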

                                                
                                    
TestKubernetesUpgrade (344.74s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-892529 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-892529 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.970237382s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-892529
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-892529: (3.729250197s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-892529 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-892529 status --format={{.Host}}: exit status 7 (63.301822ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-892529 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-892529 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m32.208825462s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-892529 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-892529 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-892529 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (75.561898ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-892529] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-892529
	    minikube start -p kubernetes-upgrade-892529 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8925292 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-892529 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-892529 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-892529 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.428534167s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-892529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-892529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-892529: (2.153913315s)
--- PASS: TestKubernetesUpgrade (344.74s)
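
A hand-rolled sketch of the downgrade guard behind exit code 106 above: compare the requested Kubernetes version against the cluster's current one and refuse to move backwards. Minikube's real check uses a proper semver library; the parser below assumes plain vMAJOR.MINOR.PATCH strings.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse turns "v1.31.0" into [1 31 0].
func parse(v string) ([3]int, error) {
	var out [3]int
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	if len(parts) != 3 {
		return out, fmt.Errorf("bad version %q", v)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, fmt.Errorf("bad version %q: %w", v, err)
		}
		out[i] = n
	}
	return out, nil
}

// checkUpgrade rejects any request strictly older than the current version.
func checkUpgrade(current, requested string) error {
	c, err := parse(current)
	if err != nil {
		return err
	}
	r, err := parse(requested)
	if err != nil {
		return err
	}
	for i := range c {
		if r[i] < c[i] {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
				current, requested)
		}
		if r[i] > c[i] {
			break
		}
	}
	return nil
}

func main() {
	fmt.Println(checkUpgrade("v1.31.0", "v1.20.0")) // refused, as in the log (exit 106)
	fmt.Println(checkUpgrade("v1.20.0", "v1.31.0")) // allowed: <nil>
}
```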

                                                
                                    
TestMissingContainerUpgrade (113.72s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2739824514 start -p missing-upgrade-624423 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2739824514 start -p missing-upgrade-624423 --memory=2200 --driver=docker  --container-runtime=crio: (45.287679284s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-624423
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-624423: (1.637421702s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-624423
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-624423 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-624423 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.827517523s)
helpers_test.go:175: Cleaning up "missing-upgrade-624423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-624423
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-624423: (1.961024408s)
--- PASS: TestMissingContainerUpgrade (113.72s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194544 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-194544 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (78.713019ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-194544] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
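
The failure above is pure flag validation: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, so passing both must fail fast with exit code 14 before any cluster work starts. A tiny sketch of that guard — flag names and exit code are from the log, the rest is illustrative.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// e.g. `go run . --no-kubernetes --kubernetes-version=1.20` exits 14.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr,
			"X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok: no-kubernetes =", *noK8s, "version =", *k8sVersion)
}
```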

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (30.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194544 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194544 --driver=docker  --container-runtime=crio: (29.792165528s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-194544 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.11s)

                                                
                                    
TestNetworkPlugins/group/false (7.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-491279 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-491279 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (183.316462ms)

                                                
                                                
-- stdout --
	* [false-491279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:39:30.505677  568158 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:39:30.506012  568158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:39:30.506022  568158 out.go:358] Setting ErrFile to fd 2...
	I0815 17:39:30.506029  568158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:39:30.506311  568158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-377193/.minikube/bin
	I0815 17:39:30.507154  568158 out.go:352] Setting JSON to false
	I0815 17:39:30.508708  568158 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8522,"bootTime":1723735048,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:39:30.508798  568158 start.go:139] virtualization: kvm guest
	I0815 17:39:30.511598  568158 out.go:177] * [false-491279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:39:30.513290  568158 notify.go:220] Checking for updates...
	I0815 17:39:30.513329  568158 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:39:30.515283  568158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:39:30.516580  568158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-377193/kubeconfig
	I0815 17:39:30.517754  568158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-377193/.minikube
	I0815 17:39:30.518895  568158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:39:30.520122  568158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:39:30.522053  568158 config.go:182] Loaded profile config "NoKubernetes-194544": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:39:30.522218  568158 config.go:182] Loaded profile config "force-systemd-env-207155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:39:30.522343  568158 config.go:182] Loaded profile config "offline-crio-157216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:39:30.522455  568158 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:39:30.553766  568158 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:39:30.553941  568158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:39:30.620905  568158 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:93 SystemTime:2024-08-15 17:39:30.609769418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0815 17:39:30.621062  568158 docker.go:307] overlay module found
	I0815 17:39:30.624069  568158 out.go:177] * Using the docker driver based on user configuration
	I0815 17:39:30.625615  568158 start.go:297] selected driver: docker
	I0815 17:39:30.625637  568158 start.go:901] validating driver "docker" against <nil>
	I0815 17:39:30.625650  568158 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:39:30.628222  568158 out.go:201] 
	W0815 17:39:30.629740  568158 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0815 17:39:30.631151  568158 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-491279 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-491279" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
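Note: the dump above is an empty kubeconfig (clusters, contexts and users are all null), which is why every probe against context "false-491279" in these debug logs errors out. A minimal Go sketch of the same pre-check, assuming client-go is available (the kubeconfig path is illustrative):

// check_context.go: report whether a kubeconfig defines a given context.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; the minikube tests point KUBECONFIG elsewhere.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("HOME") + "/.kube/config")
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["false-491279"]; !ok {
		// Matches the errors above: the context simply does not exist.
		fmt.Println(`context "false-491279" does not exist`)
	}
}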

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-491279

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491279"

                                                
                                                
----------------------- debugLogs end: false-491279 [took: 7.426377353s] --------------------------------
helpers_test.go:175: Cleaning up "false-491279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-491279
--- PASS: TestNetworkPlugins/group/false (7.76s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.67s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (94.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1291406815 start -p stopped-upgrade-327806 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1291406815 start -p stopped-upgrade-327806 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.925449355s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1291406815 -p stopped-upgrade-327806 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1291406815 -p stopped-upgrade-327806 stop: (2.460583142s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-327806 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-327806 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.133092847s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (94.52s)
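Note: the upgrade path exercised here is start-with-old-binary, stop, then start the same profile with the binary under test. A condensed sketch of that sequence using the commands from the log (error handling trimmed; the /tmp binary is the cached legacy release):

// upgrade_sequence.go: replay the stopped-binary upgrade steps above.
package main

import (
	"log"
	"os/exec"
)

func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
	}
}

func main() {
	old := "/tmp/minikube-v1.26.0.1291406815" // legacy release binary
	cur := "out/minikube-linux-amd64"         // binary under test
	p := "stopped-upgrade-327806"

	run(old, "start", "-p", p, "--memory=2200", "--vm-driver=docker", "--container-runtime=crio")
	run(old, "-p", p, "stop")
	run(cur, "start", "-p", p, "--memory=2200", "--driver=docker", "--container-runtime=crio")
}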

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (11.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194544 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194544 --no-kubernetes --driver=docker  --container-runtime=crio: (8.850815193s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-194544 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-194544 status -o json: exit status 2 (336.295385ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-194544","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-194544
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-194544: (2.390769813s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.58s)
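Note: the exit status 2 above is the expected outcome, not a failure: with --no-kubernetes the host is Running while the kubelet and API server are Stopped, and minikube signals that mixed state through the exit code. A small Go sketch decoding the JSON shown above (the struct mirrors the visible fields; it is not minikube's own type):

// parse_status.go: decode `minikube status -o json` output like the above.
package main

import (
	"encoding/json"
	"fmt"
)

type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-194544","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	// Host up, no Kubernetes components running: what the test expects.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
}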

                                                
                                    
TestNoKubernetes/serial/Start (5.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194544 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194544 --no-kubernetes --driver=docker  --container-runtime=crio: (5.423416016s)
--- PASS: TestNoKubernetes/serial/Start (5.42s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-194544 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-194544 "sudo systemctl is-active --quiet service kubelet": exit status 1 (241.578811ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
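Note: "Process exited with status 3" is the remote command's exit code passed back through ssh; `systemctl is-active` exits non-zero (3 for an inactive unit) when the service is not running, which is exactly the assertion here. A local sketch of the same check (the real test runs it over `minikube ssh`):

// kubelet_inactive.go: assert the kubelet unit is not active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log; exits 0 only when the unit is active.
	err := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err == nil {
		fmt.Println("FAIL: kubelet is active")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("ok: kubelet inactive, exit status", ee.ExitCode())
	}
}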

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.85s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-194544
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-194544: (1.206987157s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-194544 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-194544 --driver=docker  --container-runtime=crio: (8.751024544s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-194544 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-194544 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.032367ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-327806
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-327806: (1.179049225s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
TestPause/serial/Start (49.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-210872 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-210872 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.668093406s)
--- PASS: TestPause/serial/Start (49.67s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (31.05s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-210872 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-210872 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.038506857s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.05s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (46.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (46.765822682s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.77s)

                                                
                                    
TestPause/serial/Pause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-210872 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-210872 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-210872 --output=json --layout=cluster: exit status 2 (281.001239ms)

                                                
                                                
-- stdout --
	{"Name":"pause-210872","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-210872","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
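Note: in the --layout=cluster output above the numeric codes track the state names: 200 is OK, 405 is Stopped, 418 is Paused, and the command itself exits 2 while the cluster is paused. A sketch that walks the per-node component states (types are illustrative, not minikube's):

// cluster_status.go: inspect `minikube status --layout=cluster` JSON.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type clusterStatus struct {
	Name  string
	Nodes []struct {
		Name       string
		Components map[string]component
	}
}

func main() {
	raw := `{"Name":"pause-210872","Nodes":[{"Name":"pause-210872","Components":{` +
		`"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},` +
		`"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	for _, n := range cs.Nodes {
		for _, c := range n.Components {
			fmt.Printf("%s/%s: %d %s\n", n.Name, c.Name, c.StatusCode, c.StatusName)
		}
	}
}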

                                                
                                    
TestPause/serial/Unpause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-210872 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.93s)

                                                
                                    
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-210872 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
TestPause/serial/DeletePaused (2.64s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-210872 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-210872 --alsologtostderr -v=5: (2.637623088s)
--- PASS: TestPause/serial/DeletePaused (2.64s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.51s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-210872
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-210872: exit status 1 (15.393052ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-210872: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)
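Note: the non-zero `docker volume inspect` above is the assertion itself: after `minikube delete`, the profile's volume must be gone, so "no such volume" is the passing outcome. A sketch of the same post-delete check:

// verify_deleted.go: confirm a profile's docker volume no longer exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "volume", "inspect", "pause-210872").CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such volume") {
		fmt.Println("ok: volume deleted")
		return
	}
	fmt.Println("FAIL: volume still present or unexpected error:", string(out))
}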

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (42.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.590379022s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.59s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-491279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-491279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vsljv" [b9c29f76-21c9-4ca9-9803-63e2b8f2ed5a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vsljv" [b9c29f76-21c9-4ca9-9803-63e2b8f2ed5a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004099685s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.17s)
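Note: this and the later NetCatPod steps all follow the same deploy-then-wait pattern: replace the netcat deployment from testdata, then poll up to 15m for a pod labelled app=netcat to reach Running. A minimal client-go sketch of such a wait loop (not the harness's actual helper; the kubeconfig path is illustrative):

// wait_netcat.go: poll for a Running pod with label app=netcat.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute) // matches the test's wait budget
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("healthy:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}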

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7lzgl" [bf3d4779-f61a-41e6-91f6-123007a78cec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003687288s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-491279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-491279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-491279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gl8j7" [3792d1f6-d683-4d20-b93a-285e43642282] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gl8j7" [3792d1f6-d683-4d20-b93a-285e43642282] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00341611s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-491279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (56.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (56.198420877s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (45.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (45.848334545s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (45.85s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-sq8rt" [974facfc-3534-439e-8fb3-9079f8d8350c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.036489011s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-491279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-491279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sqcz6" [d903140c-98db-410b-9f2c-4a7a08db14fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sqcz6" [d903140c-98db-410b-9f2c-4a7a08db14fe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003066854s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.18s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-491279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-491279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gk8zw" [62336696-c7b1-4271-929e-35bb5d001db1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gk8zw" [62336696-c7b1-4271-929e-35bb5d001db1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004232824s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.48s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-491279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-491279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (38.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (38.076978293s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (49.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.894764228s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.89s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (39.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-491279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (39.766687427s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.77s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-491279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-491279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cj8tc" [d2a1a742-cd3c-4a83-8f22-1a79755886f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cj8tc" [d2a1a742-cd3c-4a83-8f22-1a79755886f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004520893s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (20.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-491279 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-491279 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132957924s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-491279 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-491279 exec deployment/netcat -- nslookup kubernetes.default: (5.122531699s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (20.93s)
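Note: this is the only DNS probe in the run that needed a second attempt: the first in-pod nslookup through the default bridge CNI timed out, the harness re-ran it, and the retry resolved in about 5s, so the test still passes. A simplified sketch of that retry behaviour (command copied from the log; the retry policy here is illustrative):

// dns_retry.go: retry an in-pod DNS lookup until it resolves.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "enable-default-cni-491279", "exec",
		"deployment/netcat", "--", "nslookup", "kubernetes.default"}
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("resolved on attempt %d:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(5 * time.Second) // illustrative backoff
	}
	fmt.Println("DNS never resolved")
}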

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (107.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-230056 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-230056 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (1m47.091498857s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (107.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-491279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-491279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fwdgd" [3eab78af-6e1b-49db-9bfd-83a4a3c1ec63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fwdgd" [3eab78af-6e1b-49db-9bfd-83a4a3c1ec63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003440353s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-fcgwh" [9053b848-a350-475b-b0f5-543655bf6b76] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003736536s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-491279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-491279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-491279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vdkkh" [c2b082bc-0152-41f9-82e8-435f9b3aa944] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 17:46:27.415218  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vdkkh" [c2b082bc-0152-41f9-82e8-435f9b3aa944] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003985168s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-491279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-491279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)
E0815 17:51:52.776862  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:52:00.896341  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (48.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-072672 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-072672 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (48.389908473s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (61.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-599634 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-599634 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m1.601330696s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-520249 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 17:47:31.842449  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-520249 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (44.187854111s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-072672 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [48d76835-9e45-4e75-8673-d154fe1eebc8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [48d76835-9e45-4e75-8673-d154fe1eebc8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003303311s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-072672 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.25s)
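The DeployApp step is plain kubectl against the fresh profile: create the busybox pod from testdata/busybox.yaml, wait for it to become Ready, then exec a trivial command. A minimal stand-alone sketch, assuming the pod carries the integration-test=busybox label as in the manifest used here:

	kubectl --context embed-certs-072672 create -f testdata/busybox.yaml
	# Block until the pod is Ready (the harness allows up to 8m).
	kubectl --context embed-certs-072672 wait --for=condition=ready \
	  pod -l integration-test=busybox --timeout=8m
	# Sanity-check that exec works inside the container.
	kubectl --context embed-certs-072672 exec busybox -- /bin/sh -c "ulimit -n"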

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-072672 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-072672 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)
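The --images/--registries pair overrides an addon's image per component (Component=value). Pointing MetricsServer at the unreachable registry fake.domain is deliberate: the subsequent describe only has to show that the override reached the Deployment spec, not that the image actually pulls. A minimal sketch; the trailing grep is my own convenience:

	out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-072672 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	# The image reference should now point at fake.domain.
	kubectl --context embed-certs-072672 describe deploy/metrics-server -n kube-system | grep -i image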

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-072672 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-072672 --alsologtostderr -v=3: (11.902880147s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-520249 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4147d45b-52a4-4e49-a97f-e31984b52ff5] Pending
helpers_test.go:344: "busybox" [4147d45b-52a4-4e49-a97f-e31984b52ff5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4147d45b-52a4-4e49-a97f-e31984b52ff5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.006004746s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-520249 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-599634 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72706dc1-ad40-4fd1-ac7d-bb895cd7c692] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [72706dc1-ad40-4fd1-ac7d-bb895cd7c692] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004280436s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-599634 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-520249 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-520249 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-520249 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-520249 --alsologtostderr -v=3: (11.804434567s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072672 -n embed-certs-072672
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072672 -n embed-certs-072672: exit status 7 (63.407923ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-072672 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)
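minikube status with a Go template prints just the selected field, and its exit code encodes cluster state, which is why the harness notes "exit status 7 (may be ok)" for a stopped host. When typing the template into an interactive shell, wrap it in single quotes so the braces survive. A minimal sketch of the same stopped-cluster check:

	out/minikube-linux-amd64 status -p embed-certs-072672 --format='{{.Host}}' \
	  || echo "status exited $? (non-zero is expected while the host is Stopped)"
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-072672 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4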

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (261.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-072672 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-072672 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m21.411258947s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072672 -n embed-certs-072672
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (261.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-599634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-599634 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-599634 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-599634 --alsologtostderr -v=3: (11.81651701s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-230056 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [86040cc3-76d6-4d8b-aa15-40e0d78d1052] Pending
helpers_test.go:344: "busybox" [86040cc3-76d6-4d8b-aa15-40e0d78d1052] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [86040cc3-76d6-4d8b-aa15-40e0d78d1052] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003814996s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-230056 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520249 -n default-k8s-diff-port-520249
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520249 -n default-k8s-diff-port-520249: exit status 7 (77.913562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-520249 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-520249 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-520249 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m31.023711103s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-520249 -n default-k8s-diff-port-520249
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-230056 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-230056 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-230056 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-230056 --alsologtostderr -v=3: (12.003363716s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599634 -n no-preload-599634
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599634 -n no-preload-599634: exit status 7 (69.28946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-599634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (298.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-599634 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-599634 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m58.532081507s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599634 -n no-preload-599634
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (298.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-230056 -n old-k8s-version-230056
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-230056 -n old-k8s-version-230056: exit status 7 (74.895229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-230056 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (144.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-230056 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0815 17:48:31.753010  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:31.759455  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:31.770867  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:31.792296  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:31.833803  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:31.915848  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:32.077925  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:32.399343  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:33.041228  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:34.323079  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:36.436941  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:36.443335  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:36.454706  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:36.476113  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:36.517411  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:36.598897  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:36.761123  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:36.884732  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:37.083384  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:37.724713  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:39.006867  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:41.568426  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:42.006623  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:46.690311  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:52.248958  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:48:56.932714  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:12.730466  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:17.414829  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:53.692458  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:56.045693  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:56.052069  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:56.063509  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:56.085350  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:56.126749  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:56.208245  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:56.369743  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:56.691717  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:57.333763  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:58.377032  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:58.509601  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:58.515988  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:58.527341  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:58.548710  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:58.590417  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:58.615837  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:58.672211  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:58.833812  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:59.155661  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:59.797489  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:01.078825  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:01.177217  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:03.640930  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:06.298898  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:08.762800  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:16.540596  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:19.004965  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:34.909783  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/addons-703024/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:37.022143  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:39.487246  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-230056 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m24.402485565s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-230056 -n old-k8s-version-230056
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (144.70s)
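This SecondStart pins --kubernetes-version back to v1.20.0, matching the profile's original version; the --kvm-network/--kvm-qemu-uri flags are carried along from the shared test matrix and, with --driver=docker in effect, should be inert. A minimal sketch of the restart without them, assuming the profile already exists in a stopped state:

	out/minikube-linux-amd64 start -p old-k8s-version-230056 \
	  --memory=2200 --wait=true --keep-context=false \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.20.0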

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ffqpf" [8343821d-104c-4cab-ae7f-494a4fcf4bd4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003964581s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
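UserAppExistsAfterStop asserts that the dashboard deployed before the stop/start cycle comes back on its own. The equivalent manual check is a label-selector wait in the kubernetes-dashboard namespace; a minimal sketch using the selector from this run:

	kubectl --context old-k8s-version-230056 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m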

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ffqpf" [8343821d-104c-4cab-ae7f-494a4fcf4bd4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00335515s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-230056 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-230056 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
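VerifyKubernetesImages lists the profile's cached images as JSON and reports anything outside the expected Kubernetes set; the kindnetd and busybox entries above are leftovers from earlier subtests, not failures. A minimal sketch; the jq filter is my own convenience and assumes jq is installed and that the output is an array of objects with a repoTags field:

	out/minikube-linux-amd64 -p old-k8s-version-230056 image list --format=json \
	  | jq -r '.[].repoTags[]?'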

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-230056 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-230056 -n old-k8s-version-230056
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-230056 -n old-k8s-version-230056: exit status 2 (284.130357ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-230056 -n old-k8s-version-230056
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-230056 -n old-k8s-version-230056: exit status 2 (283.217107ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-230056 --alsologtostderr -v=1
E0815 17:50:54.409340  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:54.416189  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:54.427582  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:54.448887  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:54.490647  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:54.572296  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:50:54.734379  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-230056 -n old-k8s-version-230056
E0815 17:50:55.056877  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-230056 -n old-k8s-version-230056
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.44s)
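Pause freezes the control plane: while paused, the APIServer status field reports Paused and the Kubelet field Stopped, and status exits 2, which the harness tolerates ("may be ok"). A minimal sketch of the full cycle, with the templates quoted for shell use:

	out/minikube-linux-amd64 pause -p old-k8s-version-230056
	out/minikube-linux-amd64 status -p old-k8s-version-230056 --format='{{.APIServer}}'  # Paused, exit 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-230056
	out/minikube-linux-amd64 status -p old-k8s-version-230056 --format='{{.APIServer}}'  # back to normal, exit 0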

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (26.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-623848 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 17:50:59.543580  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:04.665617  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:11.800524  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:11.807022  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:11.818418  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:11.839806  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:11.881348  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:11.963031  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:12.125025  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:12.446832  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:13.089081  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:14.370577  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:14.907682  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:15.614646  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/auto-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:16.931933  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:17.984092  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:19.920685  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:19.927026  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:19.938350  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:19.959686  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:20.001075  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:20.082591  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:20.244093  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:20.298582  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/kindnet-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:20.449277  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:20.565745  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:21.207069  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:22.053380  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:22.488588  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-623848 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (26.455400699s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.46s)
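The newest-cni profile starts with --network-plugin=cni but never installs a CNI, so --wait is narrowed to apiserver,system_pods,default_sa, presumably because a fuller readiness wait would stall without a pod network (hence the later "cni mode requires additional setup" warnings in the addon and app subtests). The kubeadm pod CIDR is threaded through via --extra-config. A minimal sketch of the same start:

	out/minikube-linux-amd64 start -p newest-cni-623848 \
	  --memory=2200 --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.31.0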

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-623848 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0815 17:51:25.050906  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-623848 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-623848 --alsologtostderr -v=3: (1.192851434s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-623848 -n newest-cni-623848
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-623848 -n newest-cni-623848: exit status 7 (63.132002ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-623848 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-623848 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 17:51:27.414521  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/functional-605215/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:30.172680  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:32.295440  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:51:35.389634  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-623848 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (12.706631457s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-623848 -n newest-cni-623848
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-623848 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-623848 --alsologtostderr -v=1
E0815 17:51:40.413961  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-623848 -n newest-cni-623848
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-623848 -n newest-cni-623848: exit status 2 (315.234518ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-623848 -n newest-cni-623848
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-623848 -n newest-cni-623848: exit status 2 (324.63378ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-623848 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-623848 -n newest-cni-623848
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-623848 -n newest-cni-623848
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.77s)
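For anyone replaying this check by hand, the Pause test above reduces to the sequence below. This is a minimal sketch assuming the newest-cni-623848 profile from this run still exists; "minikube status" intentionally exits with status 2 while components are paused or stopped, which is why the test records exit status 2 as "may be ok".

	# Pause the profile, then confirm the control plane reports Paused/Stopped.
	out/minikube-linux-amd64 pause -p newest-cni-623848 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-623848 -n newest-cni-623848   # prints "Paused", exits 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-623848 -n newest-cni-623848     # prints "Stopped", exits 2
	# Unpause and re-check; both status calls should now succeed.
	out/minikube-linux-amd64 unpause -p newest-cni-623848 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-623848 -n newest-cni-623848
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-623848 -n newest-cni-623848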

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jfx6q" [53ae8678-6a56-41b5-8b05-d66e45a019d4] Running
E0815 17:52:16.351559  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/enable-default-cni-491279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004289055s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jfx6q" [53ae8678-6a56-41b5-8b05-d66e45a019d4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003870845s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-072672 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
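The dashboard checks above poll the label selector until the pod reports healthy. A one-shot equivalent with kubectl is sketched below, assuming the embed-certs-072672 context from this run still exists (540s mirrors the test's 9m0s budget):

	kubectl --context embed-certs-072672 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s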

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-072672 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
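VerifyKubernetesImages pulls the JSON image list and flags anything outside the stock minikube set, which is where the "Found non-minikube image" lines above come from. A rough manual approximation, assuming only that grep is available (the pattern below is illustrative, not the test's actual allow-list logic):

	out/minikube-linux-amd64 -p embed-certs-072672 image list --format=json | grep -E 'kindest|k8s-minikube'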

TestStartStop/group/embed-certs/serial/Pause (2.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-072672 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072672 -n embed-certs-072672
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072672 -n embed-certs-072672: exit status 2 (304.630381ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-072672 -n embed-certs-072672
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-072672 -n embed-certs-072672: exit status 2 (308.441961ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-072672 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072672 -n embed-certs-072672
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-072672 -n embed-certs-072672
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.64s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ffcbx" [7608b4d8-a336-4bd9-bb9e-20ea0ea053fd] Running
E0815 17:52:33.738960  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/bridge-491279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003533705s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ffcbx" [7608b4d8-a336-4bd9-bb9e-20ea0ea053fd] Running
E0815 17:52:39.905650  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/calico-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:52:41.858376  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:52:42.371264  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/custom-flannel-491279/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003805055s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-520249 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-520249 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-520249 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520249 -n default-k8s-diff-port-520249
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520249 -n default-k8s-diff-port-520249: exit status 2 (281.8522ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-520249 -n default-k8s-diff-port-520249
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-520249 -n default-k8s-diff-port-520249: exit status 2 (273.232564ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-520249 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-520249 -n default-k8s-diff-port-520249
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-520249 -n default-k8s-diff-port-520249
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-stf7r" [6fbfc3ff-1238-426a-b27c-981106975c4e] Running
E0815 17:53:06.158439  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/old-k8s-version-230056/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003729484s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-stf7r" [6fbfc3ff-1238-426a-b27c-981106975c4e] Running
E0815 17:53:16.400044  384091 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-377193/.minikube/profiles/old-k8s-version-230056/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0037087s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-599634 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-599634 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.46s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-599634 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-599634 -n no-preload-599634
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-599634 -n no-preload-599634: exit status 2 (274.372993ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-599634 -n no-preload-599634
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-599634 -n no-preload-599634: exit status 2 (276.418882ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-599634 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-599634 -n no-preload-599634
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-599634 -n no-preload-599634
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.46s)

Test skip (25/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
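This skip (and the matching binaries/kubectl skips that follow) keys off a preload tarball already being present on the host, in which case images and binaries ship inside it and nothing needs caching separately. A quick way to confirm, assuming the default .minikube home (the example filename is illustrative; the real name encodes the Kubernetes version and container runtime):

	ls ~/.minikube/cache/preloaded-tarball/
	# e.g. preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4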

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.83s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-491279 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-491279

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-491279

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-491279

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-491279

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-491279

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-491279

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-491279

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-491279

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-491279

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-491279

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: /etc/hosts:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: /etc/resolv.conf:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-491279

>>> host: crictl pods:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: crictl containers:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> k8s: describe netcat deployment:
error: context "kubenet-491279" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-491279" does not exist

>>> k8s: netcat logs:
error: context "kubenet-491279" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-491279" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-491279" does not exist

>>> k8s: coredns logs:
error: context "kubenet-491279" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-491279" does not exist

>>> k8s: api server logs:
error: context "kubenet-491279" does not exist

>>> host: /etc/cni:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: ip a s:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: ip r s:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: iptables-save:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: iptables table nat:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-491279" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-491279" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-491279" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: kubelet daemon config:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> k8s: kubelet logs:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-491279

>>> host: docker daemon status:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: docker daemon config:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: docker system info:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: cri-docker daemon status:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: cri-docker daemon config:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: cri-dockerd version:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: containerd daemon status:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: containerd daemon config:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: containerd config dump:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: crio daemon status:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: crio daemon config:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: /etc/crio:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

>>> host: crio config:
* Profile "kubenet-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491279"

----------------------- debugLogs end: kubenet-491279 [took: 3.646283232s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-491279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-491279
--- SKIP: TestNetworkPlugins/group/kubenet (3.83s)
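The skip reason above is that kubenet is a legacy, non-CNI network plugin, while the crio runtime requires a real CNI, so a manual run against this runtime would select one explicitly. A sketch, assuming the docker driver used throughout this report ("--cni=bridge" is just one valid choice, not something this suite configures):

	# kubenet is rejected with crio; pick a CNI instead.
	out/minikube-linux-amd64 start -p kubenet-491279 --driver=docker --container-runtime=crio --cni=bridge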

TestNetworkPlugins/group/cilium (4.43s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-491279 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-491279

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-491279

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-491279

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-491279

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-491279

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-491279

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-491279

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-491279

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-491279

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-491279

>>> host: /etc/nsswitch.conf:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: /etc/hosts:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: /etc/resolv.conf:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-491279

>>> host: crictl pods:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: crictl containers:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> k8s: describe netcat deployment:
error: context "cilium-491279" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-491279" does not exist

>>> k8s: netcat logs:
error: context "cilium-491279" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-491279" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-491279" does not exist

>>> k8s: coredns logs:
error: context "cilium-491279" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-491279" does not exist

>>> k8s: api server logs:
error: context "cilium-491279" does not exist

>>> host: /etc/cni:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: ip a s:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: ip r s:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: iptables-save:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: iptables table nat:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-491279

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-491279

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-491279" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-491279" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-491279

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-491279

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-491279" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-491279" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-491279" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-491279" does not exist

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-491279" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

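The empty kubeconfig above (clusters, contexts, and users all null) is the root cause of every "context does not exist" failure in this dump: kubectl has no context named cilium-491279 to resolve. A minimal way to confirm this locally, sketched with standard kubectl subcommands (the profile name is taken from this log):

    # Lists the contexts kubectl knows about; an empty kubeconfig prints only the header row.
    kubectl config get-contexts
    # Dumps the merged kubeconfig, matching the ">>> k8s: kubectl config:" output above.
    kubectl config view
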
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-491279

>>> host: docker daemon status:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: docker daemon config:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: docker system info:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: cri-docker daemon status:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: cri-docker daemon config:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: cri-dockerd version:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: containerd daemon status:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: containerd daemon config:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: containerd config dump:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: crio daemon status:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: crio daemon config:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: /etc/crio:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

>>> host: crio config:
* Profile "cilium-491279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491279"

----------------------- debugLogs end: cilium-491279 [took: 4.232324402s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-491279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-491279
--- SKIP: TestNetworkPlugins/group/cilium (4.43s)
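Every collector above failed for the same reason: the group was skipped before the "cilium-491279" profile was ever started, so no kubeconfig context and no host existed when the post-mortem ran. The two commands the log itself recommends are the quickest way to check that state by hand; a sketch, with the profile name taken from this run:

    # Shows all known minikube profiles; a never-started profile will not appear.
    minikube profile list
    # Recreates the missing profile if the debug output is actually needed.
    minikube start -p cilium-491279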

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-174154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-174154
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
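This skip is driver-gated: the test only exercises its path under the virtualbox driver, and this job runs on docker. A sketch of how one might reproduce it locally against the intended driver (assumes a host with VirtualBox installed; the binary path and profile name are taken from this log):

    # Start the profile under the driver the test expects.
    out/minikube-linux-amd64 start -p disable-driver-mounts-174154 --driver=virtualbox
    # Clean up afterwards, mirroring the helper's teardown step above.
    out/minikube-linux-amd64 delete -p disable-driver-mounts-174154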
