Test Report: Docker_Linux_crio_arm64 19348

ed915dc6df1b6eb65e62a5b1fde6a752900efcab:2024-07-30:35561

Failed tests (3/336)

| Order | Failed Test                                 | Duration (s) |
|-------|---------------------------------------------|--------------|
| 43    | TestAddons/parallel/Ingress                 | 152.73       |
| 45    | TestAddons/parallel/MetricsServer           | 310.4        |
| 183   | TestMultiControlPlane/serial/RestartCluster | 127.35       |
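Each failure can be re-run in isolation through Go's test filter. A minimal sketch, assuming a minikube source checkout with the integration suite under test/integration and a pre-built out/minikube-linux-arm64 binary (the harness flags below are an assumption; adjust to your environment):

    # Re-run a single failed test against the docker/crio combination
    go test ./test/integration -v -timeout 60m \
      -run "TestAddons/parallel/Ingress" \
      -args --minikube-start-args="--driver=docker --container-runtime=crio"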
TestAddons/parallel/Ingress (152.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-261813 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-261813 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-261813 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dfa1d682-affa-495c-9c30-190a16b44581] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dfa1d682-affa-495c-9c30-190a16b44581] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003423416s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-261813 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.055429253s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
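Exit status 28 is curl's CURLE_OPERATION_TIMEDOUT: the ssh session itself worked, but the request to the ingress never completed within curl's limit. A minimal manual re-check, assuming the addons-261813 profile is still running (this mirrors the probe at addons_test.go:264, with verbosity and an explicit timeout added):

    # Repeat the in-VM probe with verbose output and a bounded timeout
    out/minikube-linux-arm64 -p addons-261813 ssh \
      "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Confirm the ingress-nginx controller is up and serving
    kubectl --context addons-261813 -n ingress-nginx get pods,svc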
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-261813 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-261813 addons disable ingress-dns --alsologtostderr -v=1: (1.198907879s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-261813 addons disable ingress --alsologtostderr -v=1: (7.724616198s)
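Note that the subsequent steps completed without error: the nslookup at addons_test.go:299 resolved hello-john.test against 192.168.49.2, so the ingress-dns path worked and only the HTTP request through the ingress timed out. The equivalent manual check, assuming the cluster is still up:

    # Same ingress-dns verification the test performs
    out/minikube-linux-arm64 -p addons-261813 ip
    nslookup hello-john.test 192.168.49.2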
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-261813
helpers_test.go:235: (dbg) docker inspect addons-261813:

-- stdout --
	[
	    {
	        "Id": "8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90",
	        "Created": "2024-07-30T02:25:57.238468722Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1599468,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-30T02:25:57.370462158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90/hostname",
	        "HostsPath": "/var/lib/docker/containers/8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90/hosts",
	        "LogPath": "/var/lib/docker/containers/8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90/8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90-json.log",
	        "Name": "/addons-261813",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-261813:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-261813",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5d4252b6024d7c7732271e66ddfc0c8aa6cc619be7a19c78bf8584a175a0612d-init/diff:/var/lib/docker/overlay2/acd0679734de498ee4da989a39c292c935753fd7c8a4808d283ba27465852ac6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d4252b6024d7c7732271e66ddfc0c8aa6cc619be7a19c78bf8584a175a0612d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d4252b6024d7c7732271e66ddfc0c8aa6cc619be7a19c78bf8584a175a0612d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d4252b6024d7c7732271e66ddfc0c8aa6cc619be7a19c78bf8584a175a0612d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-261813",
	                "Source": "/var/lib/docker/volumes/addons-261813/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-261813",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-261813",
	                "name.minikube.sigs.k8s.io": "addons-261813",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a5a1267ebe74a11cfab5819a02bafb2c19ce9fedfd6414269bd28c5cfcff0f5",
	            "SandboxKey": "/var/run/docker/netns/2a5a1267ebe7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-261813": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0521bbec070a01539d5e070e1b1fa985de506ae39d48ba4860e79902f78cfc2d",
	                    "EndpointID": "d5cb5c39432fd035122d1b99c8b651245dd2b8032ba2f6eda261cf7f02b8ea6d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-261813",
	                        "8224a32cc06f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
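In the inspect output above, HostConfig.PortBindings asks Docker for dynamically assigned host ports (empty HostPort bound to 127.0.0.1), while NetworkSettings.Ports shows what was actually allocated (38883 for 22/tcp, and so on). The same Go template that minikube runs later in this log recovers a single mapping; a sketch:

    # Host port mapped to the container's SSH port (22/tcp)
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-261813

    # Or list every mapping at once
    docker port addons-261813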
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-261813 -n addons-261813
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-261813 logs -n 25: (1.376612486s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-888216                                                                     | download-only-888216   | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| start   | --download-only -p                                                                          | download-docker-154092 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | download-docker-154092                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-154092                                                                   | download-docker-154092 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-628859   | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | binary-mirror-628859                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42821                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-628859                                                                     | binary-mirror-628859   | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| addons  | disable dashboard -p                                                                        | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | addons-261813                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | addons-261813                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-261813 --wait=true                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:29 UTC | 30 Jul 24 02:29 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-261813 ip                                                                            | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:29 UTC | 30 Jul 24 02:29 UTC |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:29 UTC | 30 Jul 24 02:29 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:29 UTC | 30 Jul 24 02:29 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:29 UTC | 30 Jul 24 02:29 UTC |
	|         | -p addons-261813                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-261813 ssh cat                                                                       | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | /opt/local-path-provisioner/pvc-c44108cc-c5e9-43dd-8069-916608c7b030_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-261813 addons                                                                        | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-261813 addons                                                                        | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | addons-261813                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | -p addons-261813                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:31 UTC |
	|         | addons-261813                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-261813 ssh curl -s                                                                   | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-261813 ip                                                                            | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:33 UTC | 30 Jul 24 02:33 UTC |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:33 UTC | 30 Jul 24 02:33 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:33 UTC | 30 Jul 24 02:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 02:25:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 02:25:33.267887 1598980 out.go:291] Setting OutFile to fd 1 ...
	I0730 02:25:33.268101 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:25:33.268114 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:25:33.268120 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:25:33.268379 1598980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 02:25:33.268849 1598980 out.go:298] Setting JSON to false
	I0730 02:25:33.269803 1598980 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":86879,"bootTime":1722219454,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0730 02:25:33.269873 1598980 start.go:139] virtualization:  
	I0730 02:25:33.272311 1598980 out.go:177] * [addons-261813] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0730 02:25:33.273952 1598980 out.go:177]   - MINIKUBE_LOCATION=19348
	I0730 02:25:33.274118 1598980 notify.go:220] Checking for updates...
	I0730 02:25:33.277820 1598980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 02:25:33.279485 1598980 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:25:33.281156 1598980 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	I0730 02:25:33.282793 1598980 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0730 02:25:33.284365 1598980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 02:25:33.286188 1598980 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 02:25:33.306398 1598980 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0730 02:25:33.306517 1598980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:25:33.378365 1598980 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-30 02:25:33.369398448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:25:33.378509 1598980 docker.go:307] overlay module found
	I0730 02:25:33.380598 1598980 out.go:177] * Using the docker driver based on user configuration
	I0730 02:25:33.382552 1598980 start.go:297] selected driver: docker
	I0730 02:25:33.382567 1598980 start.go:901] validating driver "docker" against <nil>
	I0730 02:25:33.382582 1598980 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 02:25:33.383222 1598980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:25:33.436494 1598980 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-30 02:25:33.427878111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:25:33.436667 1598980 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 02:25:33.436907 1598980 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 02:25:33.438724 1598980 out.go:177] * Using Docker driver with root privileges
	I0730 02:25:33.440325 1598980 cni.go:84] Creating CNI manager for ""
	I0730 02:25:33.440343 1598980 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0730 02:25:33.440354 1598980 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0730 02:25:33.440434 1598980 start.go:340] cluster config:
	{Name:addons-261813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-261813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:25:33.442385 1598980 out.go:177] * Starting "addons-261813" primary control-plane node in "addons-261813" cluster
	I0730 02:25:33.444063 1598980 cache.go:121] Beginning downloading kic base image for docker with crio
	I0730 02:25:33.445819 1598980 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0730 02:25:33.447460 1598980 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:25:33.447468 1598980 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0730 02:25:33.447512 1598980 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0730 02:25:33.447536 1598980 cache.go:56] Caching tarball of preloaded images
	I0730 02:25:33.447614 1598980 preload.go:172] Found /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0730 02:25:33.447624 1598980 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 02:25:33.448109 1598980 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/config.json ...
	I0730 02:25:33.448137 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/config.json: {Name:mk7399907658f87ccf6a0807cd3f6657d864c095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:25:33.461877 1598980 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0730 02:25:33.461997 1598980 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0730 02:25:33.462035 1598980 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0730 02:25:33.462045 1598980 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0730 02:25:33.462054 1598980 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0730 02:25:33.462060 1598980 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0730 02:25:50.400606 1598980 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0730 02:25:50.400661 1598980 cache.go:194] Successfully downloaded all kic artifacts
	I0730 02:25:50.400704 1598980 start.go:360] acquireMachinesLock for addons-261813: {Name:mk6ed76ff4a7e22da2e04cc04fb41fd5cadc013c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 02:25:50.401451 1598980 start.go:364] duration metric: took 716.899µs to acquireMachinesLock for "addons-261813"
	I0730 02:25:50.401494 1598980 start.go:93] Provisioning new machine with config: &{Name:addons-261813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-261813 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 02:25:50.401585 1598980 start.go:125] createHost starting for "" (driver="docker")
	I0730 02:25:50.403873 1598980 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0730 02:25:50.404184 1598980 start.go:159] libmachine.API.Create for "addons-261813" (driver="docker")
	I0730 02:25:50.404226 1598980 client.go:168] LocalClient.Create starting
	I0730 02:25:50.404345 1598980 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem
	I0730 02:25:50.591006 1598980 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem
	I0730 02:25:50.816896 1598980 cli_runner.go:164] Run: docker network inspect addons-261813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0730 02:25:50.834595 1598980 cli_runner.go:211] docker network inspect addons-261813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0730 02:25:50.834702 1598980 network_create.go:284] running [docker network inspect addons-261813] to gather additional debugging logs...
	I0730 02:25:50.834724 1598980 cli_runner.go:164] Run: docker network inspect addons-261813
	W0730 02:25:50.850774 1598980 cli_runner.go:211] docker network inspect addons-261813 returned with exit code 1
	I0730 02:25:50.850809 1598980 network_create.go:287] error running [docker network inspect addons-261813]: docker network inspect addons-261813: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-261813 not found
	I0730 02:25:50.850822 1598980 network_create.go:289] output of [docker network inspect addons-261813]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-261813 not found
	
	** /stderr **
	I0730 02:25:50.850930 1598980 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0730 02:25:50.866360 1598980 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ec930}
	I0730 02:25:50.866402 1598980 network_create.go:124] attempt to create docker network addons-261813 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0730 02:25:50.866466 1598980 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-261813 addons-261813
	I0730 02:25:50.933600 1598980 network_create.go:108] docker network addons-261813 192.168.49.0/24 created
	I0730 02:25:50.933633 1598980 kic.go:121] calculated static IP "192.168.49.2" for the "addons-261813" container
	I0730 02:25:50.933710 1598980 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0730 02:25:50.949234 1598980 cli_runner.go:164] Run: docker volume create addons-261813 --label name.minikube.sigs.k8s.io=addons-261813 --label created_by.minikube.sigs.k8s.io=true
	I0730 02:25:50.965769 1598980 oci.go:103] Successfully created a docker volume addons-261813
	I0730 02:25:50.965863 1598980 cli_runner.go:164] Run: docker run --rm --name addons-261813-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-261813 --entrypoint /usr/bin/test -v addons-261813:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0730 02:25:52.955911 1598980 cli_runner.go:217] Completed: docker run --rm --name addons-261813-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-261813 --entrypoint /usr/bin/test -v addons-261813:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib: (1.989986661s)
	I0730 02:25:52.955945 1598980 oci.go:107] Successfully prepared a docker volume addons-261813
	I0730 02:25:52.956017 1598980 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:25:52.956042 1598980 kic.go:194] Starting extracting preloaded images to volume ...
	I0730 02:25:52.956139 1598980 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-261813:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0730 02:25:57.172109 1598980 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-261813:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (4.215930181s)
	I0730 02:25:57.172143 1598980 kic.go:203] duration metric: took 4.216095273s to extract preloaded images to volume ...
	W0730 02:25:57.172290 1598980 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0730 02:25:57.172400 1598980 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0730 02:25:57.223692 1598980 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-261813 --name addons-261813 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-261813 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-261813 --network addons-261813 --ip 192.168.49.2 --volume addons-261813:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0730 02:25:57.561044 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Running}}
	I0730 02:25:57.585912 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:25:57.620922 1598980 cli_runner.go:164] Run: docker exec addons-261813 stat /var/lib/dpkg/alternatives/iptables
	I0730 02:25:57.686302 1598980 oci.go:144] the created container "addons-261813" has a running status.
	I0730 02:25:57.686330 1598980 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa...
	I0730 02:25:58.641209 1598980 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0730 02:25:58.663951 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:25:58.691203 1598980 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0730 02:25:58.691224 1598980 kic_runner.go:114] Args: [docker exec --privileged addons-261813 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0730 02:25:58.739189 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:25:58.757904 1598980 machine.go:94] provisionDockerMachine start ...
	I0730 02:25:58.758014 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:25:58.775832 1598980 main.go:141] libmachine: Using SSH client type: native
	I0730 02:25:58.776163 1598980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38883 <nil> <nil>}
	I0730 02:25:58.776179 1598980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 02:25:58.907366 1598980 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-261813
	
	I0730 02:25:58.907410 1598980 ubuntu.go:169] provisioning hostname "addons-261813"
	I0730 02:25:58.907481 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:25:58.924976 1598980 main.go:141] libmachine: Using SSH client type: native
	I0730 02:25:58.925229 1598980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38883 <nil> <nil>}
	I0730 02:25:58.925247 1598980 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-261813 && echo "addons-261813" | sudo tee /etc/hostname
	I0730 02:25:59.076295 1598980 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-261813
	
	I0730 02:25:59.076417 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:25:59.093810 1598980 main.go:141] libmachine: Using SSH client type: native
	I0730 02:25:59.094057 1598980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38883 <nil> <nil>}
	I0730 02:25:59.094079 1598980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-261813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-261813/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-261813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 02:25:59.228058 1598980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
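The three SSH exchanges above follow one pattern: read the current hostname, set and persist the new one, then ensure /etc/hosts can resolve it, preferring to rewrite an existing 127.0.1.1 entry over appending a duplicate. The result can be confirmed from inside the container with a one-liner (a sketch for local reproduction):

	docker exec addons-261813 getent hosts addons-261813   # expect: 127.0.1.1 addons-261813
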
	I0730 02:25:59.228093 1598980 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19348-1592571/.minikube CaCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19348-1592571/.minikube}
	I0730 02:25:59.228121 1598980 ubuntu.go:177] setting up certificates
	I0730 02:25:59.228131 1598980 provision.go:84] configureAuth start
	I0730 02:25:59.228208 1598980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-261813
	I0730 02:25:59.250015 1598980 provision.go:143] copyHostCerts
	I0730 02:25:59.250102 1598980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem (1078 bytes)
	I0730 02:25:59.250245 1598980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem (1123 bytes)
	I0730 02:25:59.250308 1598980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem (1675 bytes)
	I0730 02:25:59.250357 1598980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem org=jenkins.addons-261813 san=[127.0.0.1 192.168.49.2 addons-261813 localhost minikube]
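The server certificate is signed by the local minikube CA with the subject alternative names listed above, so the machine's TLS endpoints verify whether addressed as 127.0.0.1, the container IP, or a hostname. minikube does this in Go via crypto/x509; a rough openssl equivalent of the same step would be (a sketch only, with file names assumed):

	# create a key and CSR, then sign it with the CA, attaching the SANs
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr -subj "/O=jenkins.addons-261813"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-261813,DNS:localhost,DNS:minikube')
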
	I0730 02:26:00.339017 1598980 provision.go:177] copyRemoteCerts
	I0730 02:26:00.339100 1598980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 02:26:00.339148 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:00.360181 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:00.458933 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 02:26:00.484366 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0730 02:26:00.509092 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0730 02:26:00.533025 1598980 provision.go:87] duration metric: took 1.304874627s to configureAuth
	I0730 02:26:00.533108 1598980 ubuntu.go:193] setting minikube options for container-runtime
	I0730 02:26:00.533337 1598980 config.go:182] Loaded profile config "addons-261813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:26:00.533458 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:00.552949 1598980 main.go:141] libmachine: Using SSH client type: native
	I0730 02:26:00.553213 1598980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38883 <nil> <nil>}
	I0730 02:26:00.553236 1598980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 02:26:00.795238 1598980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 02:26:00.795263 1598980 machine.go:97] duration metric: took 2.03733692s to provisionDockerMachine
	I0730 02:26:00.795275 1598980 client.go:171] duration metric: took 10.391033603s to LocalClient.Create
	I0730 02:26:00.795293 1598980 start.go:167] duration metric: took 10.391111197s to libmachine.API.Create "addons-261813"
	I0730 02:26:00.795300 1598980 start.go:293] postStartSetup for "addons-261813" (driver="docker")
	I0730 02:26:00.795311 1598980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 02:26:00.795376 1598980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 02:26:00.795439 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:00.813108 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:00.909155 1598980 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 02:26:00.912272 1598980 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0730 02:26:00.912359 1598980 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0730 02:26:00.912388 1598980 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0730 02:26:00.912410 1598980 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0730 02:26:00.912456 1598980 filesync.go:126] Scanning /home/jenkins/minikube-integration/19348-1592571/.minikube/addons for local assets ...
	I0730 02:26:00.912562 1598980 filesync.go:126] Scanning /home/jenkins/minikube-integration/19348-1592571/.minikube/files for local assets ...
	I0730 02:26:00.912622 1598980 start.go:296] duration metric: took 117.31435ms for postStartSetup
	I0730 02:26:00.913016 1598980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-261813
	I0730 02:26:00.930980 1598980 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/config.json ...
	I0730 02:26:00.931289 1598980 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 02:26:00.931349 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:00.950806 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:01.045455 1598980 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0730 02:26:01.050183 1598980 start.go:128] duration metric: took 10.648581174s to createHost
	I0730 02:26:01.050209 1598980 start.go:83] releasing machines lock for "addons-261813", held for 10.648735469s
	I0730 02:26:01.050284 1598980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-261813
	I0730 02:26:01.066929 1598980 ssh_runner.go:195] Run: cat /version.json
	I0730 02:26:01.066994 1598980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 02:26:01.067089 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:01.066997 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:01.093783 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:01.101572 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:01.368278 1598980 ssh_runner.go:195] Run: systemctl --version
	I0730 02:26:01.372848 1598980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 02:26:01.517465 1598980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0730 02:26:01.521974 1598980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 02:26:01.545656 1598980 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0730 02:26:01.545762 1598980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 02:26:01.581879 1598980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0730 02:26:01.581916 1598980 start.go:495] detecting cgroup driver to use...
	I0730 02:26:01.581952 1598980 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0730 02:26:01.582019 1598980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 02:26:01.599097 1598980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 02:26:01.612002 1598980 docker.go:217] disabling cri-docker service (if available) ...
	I0730 02:26:01.612076 1598980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 02:26:01.628294 1598980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 02:26:01.644915 1598980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 02:26:01.741736 1598980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 02:26:01.840061 1598980 docker.go:233] disabling docker service ...
	I0730 02:26:01.840139 1598980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 02:26:01.860581 1598980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 02:26:01.874213 1598980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 02:26:01.970821 1598980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 02:26:02.078050 1598980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 02:26:02.090982 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 02:26:02.107904 1598980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 02:26:02.107992 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.119230 1598980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 02:26:02.119317 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.129958 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.139748 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.150179 1598980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 02:26:02.159605 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.169630 1598980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.185874 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
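Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in settings (reconstructed from the commands, not captured from the file): the pause image pinned, cgroupfs as the cgroup manager, conmon placed in the pod cgroup, and unprivileged port binding opened up so pods can listen below 1024:

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
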
	I0730 02:26:02.196187 1598980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 02:26:02.205121 1598980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 02:26:02.213536 1598980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:26:02.302719 1598980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 02:26:02.426250 1598980 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 02:26:02.426334 1598980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 02:26:02.430009 1598980 start.go:563] Will wait 60s for crictl version
	I0730 02:26:02.430072 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:26:02.433710 1598980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 02:26:02.473009 1598980 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0730 02:26:02.473118 1598980 ssh_runner.go:195] Run: crio --version
	I0730 02:26:02.509905 1598980 ssh_runner.go:195] Run: crio --version
	I0730 02:26:02.552156 1598980 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0730 02:26:02.553855 1598980 cli_runner.go:164] Run: docker network inspect addons-261813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0730 02:26:02.571690 1598980 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0730 02:26:02.575117 1598980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 02:26:02.585641 1598980 kubeadm.go:883] updating cluster {Name:addons-261813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-261813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 02:26:02.585788 1598980 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:26:02.585855 1598980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 02:26:02.658019 1598980 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 02:26:02.658043 1598980 crio.go:433] Images already preloaded, skipping extraction
	I0730 02:26:02.658103 1598980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 02:26:02.695842 1598980 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 02:26:02.695865 1598980 cache_images.go:84] Images are preloaded, skipping loading
	I0730 02:26:02.695874 1598980 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0730 02:26:02.695991 1598980 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-261813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-261813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
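The drop-in above clears the packaged ExecStart and restarts kubelet with flags matching the cluster config (bootstrap kubeconfig, node IP and hostname pinned to the container, QoS cgroups off). After minikube writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 363-byte scp further down), the merged unit can be inspected on the node with:

	sudo systemctl cat kubelet
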
	I0730 02:26:02.696109 1598980 ssh_runner.go:195] Run: crio config
	I0730 02:26:02.761096 1598980 cni.go:84] Creating CNI manager for ""
	I0730 02:26:02.761116 1598980 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0730 02:26:02.761125 1598980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 02:26:02.761154 1598980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-261813 NodeName:addons-261813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 02:26:02.761354 1598980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-261813"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
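That rendered manifest is what lands in /var/tmp/minikube/kubeadm.yaml.new a few lines below (2151 bytes, matching the scp). With kubeadm v1.26 and newer it can also be sanity-checked offline before init, e.g.:

	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
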
	I0730 02:26:02.761448 1598980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 02:26:02.771112 1598980 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 02:26:02.771207 1598980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0730 02:26:02.781369 1598980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0730 02:26:02.799173 1598980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 02:26:02.816985 1598980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0730 02:26:02.834750 1598980 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0730 02:26:02.838241 1598980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 02:26:02.848846 1598980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:26:02.930283 1598980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 02:26:02.943888 1598980 certs.go:68] Setting up /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813 for IP: 192.168.49.2
	I0730 02:26:02.943911 1598980 certs.go:194] generating shared ca certs ...
	I0730 02:26:02.943928 1598980 certs.go:226] acquiring lock for ca certs: {Name:mkd188f515cf1f581cef2c6a3cc946da59d73d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:02.944645 1598980 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key
	I0730 02:26:03.109734 1598980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt ...
	I0730 02:26:03.109768 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt: {Name:mkd023154b3e5573ffc40cf3fc0f85147ef040f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.110888 1598980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key ...
	I0730 02:26:03.110906 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key: {Name:mk5d98b96345e33944ba25ec706238280b86654e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.111717 1598980 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key
	I0730 02:26:03.221829 1598980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt ...
	I0730 02:26:03.221864 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt: {Name:mk1d57f51e38294f831619c296cd8e2d620e0692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.222066 1598980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key ...
	I0730 02:26:03.222079 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key: {Name:mkf385c7237e9a43047376f3104a113068da5114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.222645 1598980 certs.go:256] generating profile certs ...
	I0730 02:26:03.222711 1598980 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.key
	I0730 02:26:03.222730 1598980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt with IP's: []
	I0730 02:26:03.644519 1598980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt ...
	I0730 02:26:03.644556 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: {Name:mkf3448e81372a1b80d94e46e809da85032a15b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.645327 1598980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.key ...
	I0730 02:26:03.645346 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.key: {Name:mk7c8cf1b9e9a92c64baad47c58ea1361cea285d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.645450 1598980 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key.76b43c53
	I0730 02:26:03.645472 1598980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt.76b43c53 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0730 02:26:04.098132 1598980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt.76b43c53 ...
	I0730 02:26:04.098169 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt.76b43c53: {Name:mk88d85e52611170516cc4f640305dea7464276f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:04.098379 1598980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key.76b43c53 ...
	I0730 02:26:04.098399 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key.76b43c53: {Name:mk2e7ad47416e67b41dbc11e0500d74bc0af2676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:04.098523 1598980 certs.go:381] copying /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt.76b43c53 -> /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt
	I0730 02:26:04.098614 1598980 certs.go:385] copying /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key.76b43c53 -> /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key
	I0730 02:26:04.098723 1598980 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.key
	I0730 02:26:04.098745 1598980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.crt with IP's: []
	I0730 02:26:04.618955 1598980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.crt ...
	I0730 02:26:04.618989 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.crt: {Name:mk424ecb4f85199cb0be767926743e984c77f8eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:04.619273 1598980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.key ...
	I0730 02:26:04.619291 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.key: {Name:mke95e26c260749605364ba06ec1e8050f03ffbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:04.620254 1598980 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 02:26:04.620315 1598980 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem (1078 bytes)
	I0730 02:26:04.620344 1598980 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem (1123 bytes)
	I0730 02:26:04.620373 1598980 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem (1675 bytes)
	I0730 02:26:04.621059 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 02:26:04.646978 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0730 02:26:04.670894 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 02:26:04.695105 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0730 02:26:04.719505 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0730 02:26:04.744249 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 02:26:04.768188 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 02:26:04.793219 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0730 02:26:04.817137 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 02:26:04.841521 1598980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 02:26:04.859576 1598980 ssh_runner.go:195] Run: openssl version
	I0730 02:26:04.865123 1598980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 02:26:04.874527 1598980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:26:04.878096 1598980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:26:04.878202 1598980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:26:04.885142 1598980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
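The two ln -fs calls install the minikube CA using OpenSSL's hashed-directory convention: certificates under /etc/ssl/certs are looked up via symlinks named <subject-hash>.0, with the hash coming from the openssl x509 -hash call above (b5213941 here). A quick self-check that the store now trusts the CA (a sketch):

	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem
	# expected output: /usr/share/ca-certificates/minikubeCA.pem: OK
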
	I0730 02:26:04.894756 1598980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 02:26:04.898304 1598980 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 02:26:04.898355 1598980 kubeadm.go:392] StartCluster: {Name:addons-261813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-261813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:26:04.898434 1598980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 02:26:04.898656 1598980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 02:26:04.939207 1598980 cri.go:89] found id: ""
	I0730 02:26:04.939291 1598980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0730 02:26:04.947992 1598980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0730 02:26:04.956588 1598980 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0730 02:26:04.956678 1598980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0730 02:26:04.967162 1598980 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0730 02:26:04.967190 1598980 kubeadm.go:157] found existing configuration files:
	
	I0730 02:26:04.967244 1598980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0730 02:26:04.975754 1598980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0730 02:26:04.975865 1598980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0730 02:26:04.984299 1598980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0730 02:26:04.992671 1598980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0730 02:26:04.992760 1598980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0730 02:26:05.001992 1598980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0730 02:26:05.013100 1598980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0730 02:26:05.013176 1598980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0730 02:26:05.023217 1598980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0730 02:26:05.032884 1598980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0730 02:26:05.032956 1598980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
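The four grep-then-rm exchanges above are stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at the expected control-plane endpoint; everything else (here, files that simply do not exist yet) is removed so kubeadm init starts clean. Condensed into shell, the pattern is roughly (a sketch of the logic, which minikube implements in Go):

	ep="https://control-plane.minikube.internal:8443"
	for f in admin kubelet controller-manager scheduler; do
	  # keep the file only if it already targets the expected endpoint
	  sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done
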
	I0730 02:26:05.041916 1598980 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0730 02:26:05.088384 1598980 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0730 02:26:05.088792 1598980 kubeadm.go:310] [preflight] Running pre-flight checks
	I0730 02:26:05.128720 1598980 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0730 02:26:05.128859 1598980 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1065-aws
	I0730 02:26:05.128913 1598980 kubeadm.go:310] OS: Linux
	I0730 02:26:05.128986 1598980 kubeadm.go:310] CGROUPS_CPU: enabled
	I0730 02:26:05.129068 1598980 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0730 02:26:05.129140 1598980 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0730 02:26:05.129217 1598980 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0730 02:26:05.129292 1598980 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0730 02:26:05.129416 1598980 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0730 02:26:05.129495 1598980 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0730 02:26:05.129576 1598980 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0730 02:26:05.129653 1598980 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0730 02:26:05.196323 1598980 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0730 02:26:05.196587 1598980 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0730 02:26:05.196722 1598980 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0730 02:26:05.448528 1598980 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0730 02:26:05.451749 1598980 out.go:204]   - Generating certificates and keys ...
	I0730 02:26:05.451944 1598980 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0730 02:26:05.452084 1598980 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0730 02:26:05.881971 1598980 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0730 02:26:06.395712 1598980 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0730 02:26:06.652234 1598980 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0730 02:26:07.301158 1598980 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0730 02:26:08.666808 1598980 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0730 02:26:08.667128 1598980 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-261813 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0730 02:26:09.621643 1598980 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0730 02:26:09.621931 1598980 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-261813 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0730 02:26:09.968295 1598980 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0730 02:26:10.630071 1598980 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0730 02:26:10.826805 1598980 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0730 02:26:10.827056 1598980 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0730 02:26:11.370118 1598980 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0730 02:26:11.894074 1598980 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0730 02:26:12.119229 1598980 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0730 02:26:12.370498 1598980 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0730 02:26:12.798192 1598980 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0730 02:26:12.798772 1598980 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0730 02:26:12.801768 1598980 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0730 02:26:12.804051 1598980 out.go:204]   - Booting up control plane ...
	I0730 02:26:12.804170 1598980 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0730 02:26:12.804254 1598980 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0730 02:26:12.805066 1598980 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0730 02:26:12.830409 1598980 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0730 02:26:12.832317 1598980 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0730 02:26:12.832390 1598980 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0730 02:26:12.924871 1598980 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0730 02:26:12.924959 1598980 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0730 02:26:13.925581 1598980 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000801637s
	I0730 02:26:13.925691 1598980 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0730 02:26:19.927861 1598980 kubeadm.go:310] [api-check] The API server is healthy after 6.002270246s
	I0730 02:26:19.946595 1598980 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0730 02:26:19.960246 1598980 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0730 02:26:19.983539 1598980 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0730 02:26:19.983736 1598980 kubeadm.go:310] [mark-control-plane] Marking the node addons-261813 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0730 02:26:19.995037 1598980 kubeadm.go:310] [bootstrap-token] Using token: zfd3n7.au06qsgfsxnz77lt
	I0730 02:26:19.997119 1598980 out.go:204]   - Configuring RBAC rules ...
	I0730 02:26:19.997282 1598980 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0730 02:26:20.014823 1598980 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0730 02:26:20.026266 1598980 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0730 02:26:20.032700 1598980 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0730 02:26:20.038942 1598980 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0730 02:26:20.043190 1598980 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0730 02:26:20.334919 1598980 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0730 02:26:20.763790 1598980 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0730 02:26:21.334325 1598980 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0730 02:26:21.335683 1598980 kubeadm.go:310] 
	I0730 02:26:21.335773 1598980 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0730 02:26:21.335785 1598980 kubeadm.go:310] 
	I0730 02:26:21.335887 1598980 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0730 02:26:21.335896 1598980 kubeadm.go:310] 
	I0730 02:26:21.335927 1598980 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0730 02:26:21.336025 1598980 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0730 02:26:21.336101 1598980 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0730 02:26:21.336115 1598980 kubeadm.go:310] 
	I0730 02:26:21.336176 1598980 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0730 02:26:21.336182 1598980 kubeadm.go:310] 
	I0730 02:26:21.336255 1598980 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0730 02:26:21.336283 1598980 kubeadm.go:310] 
	I0730 02:26:21.336345 1598980 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0730 02:26:21.336453 1598980 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0730 02:26:21.336537 1598980 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0730 02:26:21.336565 1598980 kubeadm.go:310] 
	I0730 02:26:21.336742 1598980 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0730 02:26:21.336828 1598980 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0730 02:26:21.336840 1598980 kubeadm.go:310] 
	I0730 02:26:21.336928 1598980 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zfd3n7.au06qsgfsxnz77lt \
	I0730 02:26:21.337048 1598980 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57de0a3c7c2240aa1874003464848c868dfbdf86454d09acc1d3772ff2d3bc49 \
	I0730 02:26:21.337097 1598980 kubeadm.go:310] 	--control-plane 
	I0730 02:26:21.337106 1598980 kubeadm.go:310] 
	I0730 02:26:21.337202 1598980 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0730 02:26:21.337217 1598980 kubeadm.go:310] 
	I0730 02:26:21.337319 1598980 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zfd3n7.au06qsgfsxnz77lt \
	I0730 02:26:21.337449 1598980 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57de0a3c7c2240aa1874003464848c868dfbdf86454d09acc1d3772ff2d3bc49 
	I0730 02:26:21.341662 1598980 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1065-aws\n", err: exit status 1
	I0730 02:26:21.341780 1598980 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
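The --discovery-token-ca-cert-hash printed in the join commands pins the cluster CA for joining nodes: it is a SHA-256 digest over the DER-encoded public key of ca.crt, letting a node that bootstraps over the unauthenticated token channel verify it fetched the right CA. It can be recomputed with the standard recipe from the kubeadm docs (path adjusted to minikube's certificatesDir):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
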
	I0730 02:26:21.341802 1598980 cni.go:84] Creating CNI manager for ""
	I0730 02:26:21.341810 1598980 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0730 02:26:21.344966 1598980 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0730 02:26:21.346611 1598980 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0730 02:26:21.350859 1598980 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0730 02:26:21.350883 1598980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0730 02:26:21.369409 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0730 02:26:21.633648 1598980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0730 02:26:21.633787 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:21.633876 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-261813 minikube.k8s.io/updated_at=2024_07_30T02_26_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9 minikube.k8s.io/name=addons-261813 minikube.k8s.io/primary=true
	I0730 02:26:21.778549 1598980 ops.go:34] apiserver oom_adj: -16
	I0730 02:26:21.778647 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:22.278809 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:22.778823 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:23.279055 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:23.779028 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:24.279326 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:24.779595 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:25.279518 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:25.778804 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:26.278878 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:26.779228 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:27.278839 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:27.779389 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:28.279347 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:28.779421 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:29.279568 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:29.779284 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:30.279362 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:30.778891 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:31.279272 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:31.779245 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:32.278841 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:32.778762 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:33.279503 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:33.779609 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:34.279502 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:34.364065 1598980 kubeadm.go:1113] duration metric: took 12.730327479s to wait for elevateKubeSystemPrivileges
	I0730 02:26:34.364106 1598980 kubeadm.go:394] duration metric: took 29.465756511s to StartCluster
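	(Annotation: the burst of `kubectl get sa default` calls above is minikube polling until the `default` ServiceAccount exists, which is its signal that kube-system privileges can be elevated. A minimal standalone sketch of the same wait, using the exact command from the log; the 0.5s interval is an assumption matching the ~500ms cadence of the timestamps:
	
		until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
		      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5   # poll until the default ServiceAccount is created
		done
	)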
	I0730 02:26:34.364124 1598980 settings.go:142] acquiring lock: {Name:mk63e25bcb01770839277a929f9ba49ce5be4445 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:34.364759 1598980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:26:34.365154 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/kubeconfig: {Name:mk572b463a11a946de92ccc491c42330cd76de64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:34.365353 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0730 02:26:34.365381 1598980 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 02:26:34.365620 1598980 config.go:182] Loaded profile config "addons-261813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:26:34.365650 1598980 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0730 02:26:34.365732 1598980 addons.go:69] Setting yakd=true in profile "addons-261813"
	I0730 02:26:34.365761 1598980 addons.go:234] Setting addon yakd=true in "addons-261813"
	I0730 02:26:34.365772 1598980 addons.go:69] Setting inspektor-gadget=true in profile "addons-261813"
	I0730 02:26:34.365787 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.365795 1598980 addons.go:234] Setting addon inspektor-gadget=true in "addons-261813"
	I0730 02:26:34.365833 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.366207 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.366323 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.366630 1598980 addons.go:69] Setting metrics-server=true in profile "addons-261813"
	I0730 02:26:34.366660 1598980 addons.go:234] Setting addon metrics-server=true in "addons-261813"
	I0730 02:26:34.366685 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.367057 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.369073 1598980 addons.go:69] Setting cloud-spanner=true in profile "addons-261813"
	I0730 02:26:34.369331 1598980 addons.go:234] Setting addon cloud-spanner=true in "addons-261813"
	I0730 02:26:34.369386 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.369558 1598980 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-261813"
	I0730 02:26:34.369580 1598980 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-261813"
	I0730 02:26:34.369598 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.369962 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.370597 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.369222 1598980 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-261813"
	I0730 02:26:34.369232 1598980 addons.go:69] Setting default-storageclass=true in profile "addons-261813"
	I0730 02:26:34.369246 1598980 addons.go:69] Setting gcp-auth=true in profile "addons-261813"
	I0730 02:26:34.369253 1598980 addons.go:69] Setting ingress=true in profile "addons-261813"
	I0730 02:26:34.369259 1598980 addons.go:69] Setting ingress-dns=true in profile "addons-261813"
	I0730 02:26:34.370769 1598980 out.go:177] * Verifying Kubernetes components...
	I0730 02:26:34.371070 1598980 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-261813"
	I0730 02:26:34.371083 1598980 addons.go:69] Setting registry=true in profile "addons-261813"
	I0730 02:26:34.371092 1598980 addons.go:69] Setting storage-provisioner=true in profile "addons-261813"
	I0730 02:26:34.371099 1598980 addons.go:69] Setting volcano=true in profile "addons-261813"
	I0730 02:26:34.371106 1598980 addons.go:69] Setting volumesnapshots=true in profile "addons-261813"
	I0730 02:26:34.373035 1598980 addons.go:234] Setting addon volumesnapshots=true in "addons-261813"
	I0730 02:26:34.373086 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.373517 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.378274 1598980 addons.go:234] Setting addon ingress-dns=true in "addons-261813"
	I0730 02:26:34.378335 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.378751 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.396965 1598980 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-261813"
	I0730 02:26:34.397446 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.406007 1598980 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-261813"
	I0730 02:26:34.406059 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.406483 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.412913 1598980 addons.go:234] Setting addon registry=true in "addons-261813"
	I0730 02:26:34.412973 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.413425 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.426662 1598980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-261813"
	I0730 02:26:34.426993 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.436078 1598980 addons.go:234] Setting addon storage-provisioner=true in "addons-261813"
	I0730 02:26:34.436138 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.436666 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.448219 1598980 addons.go:234] Setting addon volcano=true in "addons-261813"
	I0730 02:26:34.448291 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.456186 1598980 mustload.go:65] Loading cluster: addons-261813
	I0730 02:26:34.456378 1598980 config.go:182] Loaded profile config "addons-261813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:26:34.456647 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.487793 1598980 addons.go:234] Setting addon ingress=true in "addons-261813"
	I0730 02:26:34.487860 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.488388 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.509749 1598980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:26:34.511114 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.556587 1598980 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0730 02:26:34.559212 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0730 02:26:34.559285 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0730 02:26:34.559384 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
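	(Annotation: the `-f` template in the `docker container inspect` calls above digs the published host port out of the container's port map: the inner `index .NetworkSettings.Ports "22/tcp"` selects the binding list for the container's SSH port, the outer `index ... 0` takes its first entry, and `.HostPort` is the value minikube then dials. Equivalent standalone invocation, same container name as in the log:
	
		docker container inspect \
		  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
		  addons-261813
	)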
	I0730 02:26:34.559561 1598980 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 02:26:34.560640 1598980 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0730 02:26:34.561436 1598980 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 02:26:34.561480 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0730 02:26:34.561576 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.592230 1598980 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0730 02:26:34.592295 1598980 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0730 02:26:34.592394 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.593271 1598980 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0730 02:26:34.596733 1598980 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0730 02:26:34.596790 1598980 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0730 02:26:34.596903 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.614666 1598980 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0730 02:26:34.614806 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0730 02:26:34.614812 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0730 02:26:34.617960 1598980 addons.go:234] Setting addon default-storageclass=true in "addons-261813"
	I0730 02:26:34.622372 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.622834 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.644201 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0730 02:26:34.644271 1598980 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0730 02:26:34.644374 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.655181 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.661970 1598980 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-261813"
	I0730 02:26:34.662019 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.672065 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.701686 1598980 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0730 02:26:34.704172 1598980 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0730 02:26:34.704194 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0730 02:26:34.704274 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.707706 1598980 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0730 02:26:34.709729 1598980 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0730 02:26:34.709751 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0730 02:26:34.709827 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.712863 1598980 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0730 02:26:34.712919 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0730 02:26:34.713017 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.717124 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0730 02:26:34.717552 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.722336 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0730 02:26:34.723124 1598980 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0730 02:26:34.723141 1598980 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0730 02:26:34.723305 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	W0730 02:26:34.744379 1598980 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0730 02:26:34.754588 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0730 02:26:34.757858 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0730 02:26:34.764648 1598980 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0730 02:26:34.772126 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.775688 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0730 02:26:34.775810 1598980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0730 02:26:34.778219 1598980 out.go:177]   - Using image docker.io/registry:2.8.3
	I0730 02:26:34.781394 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0730 02:26:34.781539 1598980 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0730 02:26:34.781557 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0730 02:26:34.781623 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.787025 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0730 02:26:34.801389 1598980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 02:26:34.803180 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0730 02:26:34.811575 1598980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 02:26:34.830517 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0730 02:26:34.830541 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0730 02:26:34.830622 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.836035 1598980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 02:26:34.841762 1598980 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0730 02:26:34.842070 1598980 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0730 02:26:34.842083 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0730 02:26:34.842147 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.845813 1598980 out.go:177]   - Using image docker.io/busybox:stable
	I0730 02:26:34.850831 1598980 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0730 02:26:34.850855 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0730 02:26:34.850920 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.883633 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.910698 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.914336 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.914346 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.921978 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.949899 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.960543 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.985945 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.986848 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:35.019652 1598980 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0730 02:26:35.019687 1598980 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0730 02:26:35.021681 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:35.028076 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	W0730 02:26:35.032165 1598980 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0730 02:26:35.032201 1598980 retry.go:31] will retry after 319.000514ms: ssh: handshake failed: EOF
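	(Annotation: the handshake EOF above is treated as transient, likely because many parallel addon installers are dialing the same sshd at once, so the runner retries after a short delay. A generic shell rendering of that retry, using the port and user shown in the sshutil lines; the delay values are illustrative only:
	
		for delay in 0.3 1 2; do
		  ssh -p 38883 docker@127.0.0.1 true && break   # succeed and stop retrying
		  sleep "$delay"                                # back off before the next attempt
		done
	)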
	I0730 02:26:35.135253 1598980 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0730 02:26:35.135276 1598980 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0730 02:26:35.174090 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0730 02:26:35.174166 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0730 02:26:35.265055 1598980 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0730 02:26:35.265074 1598980 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0730 02:26:35.288844 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0730 02:26:35.288863 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0730 02:26:35.314589 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0730 02:26:35.344768 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0730 02:26:35.364894 1598980 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0730 02:26:35.364969 1598980 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0730 02:26:35.410834 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 02:26:35.414097 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0730 02:26:35.422992 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0730 02:26:35.423064 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0730 02:26:35.481727 1598980 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0730 02:26:35.481811 1598980 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0730 02:26:35.498615 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0730 02:26:35.504580 1598980 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0730 02:26:35.504650 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0730 02:26:35.523763 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0730 02:26:35.523839 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0730 02:26:35.524689 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0730 02:26:35.529147 1598980 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0730 02:26:35.529210 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0730 02:26:35.571801 1598980 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0730 02:26:35.571878 1598980 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0730 02:26:35.590150 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0730 02:26:35.590229 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0730 02:26:35.626343 1598980 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0730 02:26:35.626419 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0730 02:26:35.654222 1598980 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0730 02:26:35.654297 1598980 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0730 02:26:35.716490 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0730 02:26:35.719689 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0730 02:26:35.719760 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0730 02:26:35.722600 1598980 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0730 02:26:35.722663 1598980 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0730 02:26:35.793234 1598980 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0730 02:26:35.793310 1598980 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0730 02:26:35.798389 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0730 02:26:35.817023 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0730 02:26:35.817098 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0730 02:26:35.857539 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0730 02:26:35.857619 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0730 02:26:35.869584 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0730 02:26:35.903706 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0730 02:26:35.903781 1598980 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0730 02:26:35.971403 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0730 02:26:35.991836 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0730 02:26:35.991913 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0730 02:26:36.041892 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0730 02:26:36.041965 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0730 02:26:36.047485 1598980 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 02:26:36.047568 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0730 02:26:36.122753 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0730 02:26:36.122834 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0730 02:26:36.164656 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0730 02:26:36.174671 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 02:26:36.233342 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0730 02:26:36.233429 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0730 02:26:36.334649 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0730 02:26:36.334725 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0730 02:26:36.418674 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0730 02:26:36.418746 1598980 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0730 02:26:36.505420 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0730 02:26:36.505493 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0730 02:26:36.576870 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0730 02:26:36.576958 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0730 02:26:36.761236 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0730 02:26:36.761311 1598980 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0730 02:26:36.951448 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0730 02:26:37.305865 1598980 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.588702064s)
	I0730 02:26:37.305958 1598980 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
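	(Annotation: the completed pipeline above injects a `hosts` block into the CoreDNS Corefile so cluster pods can resolve `host.minikube.internal` to the host gateway, 192.168.49.1 in this run. Outside minikube the same edit looks like the following sketch, assuming kubectl access to kube-system; the sed expression is copied from the log:
	
		kubectl -n kube-system get configmap coredns -o yaml \
		  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
		  | kubectl replace -f -
	)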
	I0730 02:26:37.306269 1598980 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.476012543s)
	I0730 02:26:37.308092 1598980 node_ready.go:35] waiting up to 6m0s for node "addons-261813" to be "Ready" ...
	I0730 02:26:38.021763 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.707063475s)
	I0730 02:26:38.276471 1598980 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-261813" context rescaled to 1 replicas
	I0730 02:26:38.701205 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.356317324s)
	I0730 02:26:39.351905 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:39.734875 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.323970155s)
	I0730 02:26:39.734942 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.320791008s)
	I0730 02:26:39.734969 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.236288917s)
	I0730 02:26:41.229310 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.704550314s)
	I0730 02:26:41.229346 1598980 addons.go:475] Verifying addon ingress=true in "addons-261813"
	I0730 02:26:41.229552 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.512995673s)
	I0730 02:26:41.229888 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.4314258s)
	I0730 02:26:41.229914 1598980 addons.go:475] Verifying addon registry=true in "addons-261813"
	I0730 02:26:41.230021 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.360367555s)
	I0730 02:26:41.230102 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.258629094s)
	I0730 02:26:41.230413 1598980 addons.go:475] Verifying addon metrics-server=true in "addons-261813"
	I0730 02:26:41.230157 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.06542772s)
	I0730 02:26:41.232392 1598980 out.go:177] * Verifying registry addon...
	I0730 02:26:41.232450 1598980 out.go:177] * Verifying ingress addon...
	I0730 02:26:41.232468 1598980 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-261813 service yakd-dashboard -n yakd-dashboard
	
	I0730 02:26:41.235929 1598980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0730 02:26:41.236848 1598980 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0730 02:26:41.254165 1598980 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0730 02:26:41.254197 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:41.254872 1598980 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0730 02:26:41.254886 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
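	(Annotation: the repeating kapi.go:96 lines that follow are minikube's readiness poll: it lists pods by label selector and loops until they leave Pending. A rough standalone equivalent for the ingress check, using real `kubectl wait` semantics but with a hypothetical timeout choice:
	
		kubectl -n ingress-nginx wait pod \
		  -l app.kubernetes.io/name=ingress-nginx \
		  --for=condition=Ready --timeout=6m
	)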
	I0730 02:26:41.361161 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.186396901s)
	W0730 02:26:41.361208 1598980 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0730 02:26:41.361229 1598980 retry.go:31] will retry after 148.504854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0730 02:26:41.510640 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
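	(Annotation: the `ensure CRDs are installed first` failure above is an apply-ordering race: the VolumeSnapshotClass object is submitted before the freshly created `volumesnapshotclasses.snapshot.storage.k8s.io` CRD is established, so minikube retries, and the `--force` reapply at 02:26:41.510640 completes at 02:26:44.505638 below. When scripting this by hand, one way to avoid the race is to wait for CRD establishment before applying resources that use it; a sketch, not what minikube itself runs:
	
		kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f csi-hostpath-snapshotclass.yaml
	)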
	I0730 02:26:41.645411 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.69386487s)
	I0730 02:26:41.645450 1598980 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-261813"
	I0730 02:26:41.647620 1598980 out.go:177] * Verifying csi-hostpath-driver addon...
	I0730 02:26:41.650365 1598980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0730 02:26:41.662999 1598980 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0730 02:26:41.663026 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:41.742167 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:41.753318 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:41.814737 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:42.155157 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:42.243472 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:42.243927 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:42.654593 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:42.741119 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:42.742254 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:43.155065 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:43.241955 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:43.242852 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:43.657733 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:43.721297 1598980 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0730 02:26:43.721455 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:43.745898 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:43.746103 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:43.757789 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:43.912642 1598980 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0730 02:26:43.941070 1598980 addons.go:234] Setting addon gcp-auth=true in "addons-261813"
	I0730 02:26:43.941132 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:43.941636 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:43.962219 1598980 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0730 02:26:43.962278 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:44.002665 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:44.155506 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:44.246173 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:44.247218 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:44.312346 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:44.505638 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.99494461s)
	I0730 02:26:44.508954 1598980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 02:26:44.511656 1598980 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0730 02:26:44.514964 1598980 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0730 02:26:44.514988 1598980 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0730 02:26:44.559536 1598980 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0730 02:26:44.559559 1598980 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0730 02:26:44.585829 1598980 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0730 02:26:44.585857 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0730 02:26:44.605992 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0730 02:26:44.655298 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:44.742351 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:44.742997 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:45.171295 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:45.246429 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:45.249090 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:45.378343 1598980 addons.go:475] Verifying addon gcp-auth=true in "addons-261813"
	I0730 02:26:45.381252 1598980 out.go:177] * Verifying gcp-auth addon...
	I0730 02:26:45.384788 1598980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0730 02:26:45.426843 1598980 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0730 02:26:45.426919 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:45.656025 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:45.740233 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:45.744010 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:45.888227 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:46.160697 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:46.240706 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:46.241467 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:46.316976 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:46.388922 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:46.655177 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:46.743217 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:46.746892 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:46.890153 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:47.155768 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:47.242156 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:47.243819 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:47.388320 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:47.655413 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:47.741289 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:47.742091 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:47.894664 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:48.155107 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:48.240387 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:48.244582 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:48.390561 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:48.659197 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:48.741514 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:48.742244 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:48.811221 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:48.893450 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:49.154818 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:49.240842 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:49.241761 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:49.388473 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:49.654931 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:49.742071 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:49.742148 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:49.892653 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:50.154665 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:50.239916 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:50.240533 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:50.395048 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:50.655210 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:50.740371 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:50.741455 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:50.812394 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:50.889627 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:51.154810 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:51.240694 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:51.243874 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:51.389132 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:51.654924 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:51.740902 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:51.741270 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:51.888836 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:52.154899 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:52.242144 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:52.248029 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:52.389261 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:52.655006 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:52.741291 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:52.741556 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:52.888902 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:53.155004 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:53.241854 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:53.245743 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:53.312154 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:53.388628 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:53.654530 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:53.741324 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:53.742037 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:53.888409 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:54.155224 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:54.241357 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:54.242091 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:54.388534 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:54.654254 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:54.740008 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:54.741655 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:54.888363 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:55.155061 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:55.240410 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:55.241134 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:55.388521 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:55.654530 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:55.741549 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:55.742133 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:55.811855 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:55.887983 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:56.154599 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:56.241195 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:56.241516 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:56.388790 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:56.655498 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:56.740863 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:56.742044 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:56.888669 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:57.154542 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:57.240944 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:57.241734 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:57.388509 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:57.655296 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:57.740691 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:57.741780 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:57.888437 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:58.154659 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:58.240472 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:58.241417 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:58.312202 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:58.388575 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:58.654842 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:58.740081 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:58.740860 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:58.888114 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:59.154979 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:59.241367 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:59.241577 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:59.388501 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:59.654757 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:59.741178 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:59.741736 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:59.888299 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:00.160281 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:00.241426 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:00.241932 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:00.312260 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:00.388669 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:00.654940 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:00.739531 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:00.741083 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:00.889266 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:01.154769 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:01.241018 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:01.241979 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:01.388296 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:01.655156 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:01.741343 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:01.742293 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:01.890266 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:02.154257 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:02.240131 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:02.241991 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:02.388413 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:02.655214 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:02.740098 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:02.741421 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:02.811809 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:02.888669 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:03.155400 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:03.240829 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:03.241180 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:03.387921 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:03.655018 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:03.741073 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:03.741687 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:03.888772 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:04.154686 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:04.241383 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:04.241645 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:04.388184 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:04.655331 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:04.741102 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:04.741432 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:04.888730 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:05.155110 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:05.240513 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:05.241526 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:05.311644 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:05.388795 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:05.654487 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:05.741698 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:05.742241 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:05.888671 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:06.154903 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:06.240003 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:06.242452 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:06.388811 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:06.654473 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:06.741736 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:06.742097 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:06.888462 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:07.155451 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:07.241836 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:07.245696 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:07.312063 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:07.388275 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:07.655678 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:07.742784 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:07.743855 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:07.888198 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:08.155445 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:08.249204 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:08.250185 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:08.388354 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:08.655101 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:08.740568 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:08.741241 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:08.888528 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:09.154744 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:09.242011 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:09.242248 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:09.388488 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:09.654631 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:09.740822 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:09.741412 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:09.811060 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:09.888793 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:10.155786 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:10.240930 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:10.242231 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:10.388536 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:10.654299 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:10.740750 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:10.741413 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:10.888238 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:11.155341 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:11.240221 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:11.242065 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:11.388104 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:11.654295 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:11.740483 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:11.744584 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:11.811478 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:11.888599 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:12.154414 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:12.242948 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:12.243864 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:12.389227 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:12.654350 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:12.740500 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:12.741056 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:12.888213 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:13.154757 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:13.240631 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:13.241046 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:13.388887 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:13.654748 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:13.741125 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:13.741982 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:13.888807 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:14.154717 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:14.240157 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:14.241451 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:14.311331 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:14.388392 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:14.654520 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:14.741042 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:14.741722 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:14.888625 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:15.154914 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:15.239721 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:15.241189 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:15.388410 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:15.654785 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:15.740239 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:15.740973 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:15.888129 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:16.155116 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:16.241810 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:16.242560 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:16.388187 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:16.655017 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:16.739612 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:16.741014 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:16.810753 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:16.887753 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:17.154629 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:17.240909 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:17.241477 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:17.387783 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:17.654924 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:17.739812 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:17.740511 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:17.888863 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:18.154827 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:18.240659 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:18.241469 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:18.387853 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:18.654449 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:18.739358 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:18.740795 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:18.812083 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:18.888880 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:19.155556 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:19.240708 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:19.241652 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:19.388799 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:19.654773 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:19.740147 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:19.741895 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:19.888881 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:20.155114 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:20.240490 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:20.240732 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:20.388624 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:20.654846 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:20.740778 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:20.741693 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:20.813524 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:20.888286 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:21.155319 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:21.240375 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:21.241328 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:21.388316 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:21.660002 1598980 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0730 02:27:21.660031 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:21.766128 1598980 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0730 02:27:21.766162 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:21.766907 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
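
kapi.go:96 polls each addon's pods by label selector until every matching pod is Running and Ready; the "Found N Pods" lines above mark the first time the selectors return anything, right as the node comes up. A hand-run equivalent of that lookup (searching all namespaces, since the addons are spread across several) would be:

	kubectl --context addons-261813 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
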
	I0730 02:27:21.882741 1598980 node_ready.go:49] node "addons-261813" has status "Ready":"True"
	I0730 02:27:21.882766 1598980 node_ready.go:38] duration metric: took 44.574620375s for node "addons-261813" to be "Ready" ...
	I0730 02:27:21.882778 1598980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
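
The node_ready.go lines above poll the node's Ready condition every couple of seconds until it flips to True (44.57s in this run). A minimal client-go sketch of that style of check, for orientation only; the helper name, default kubeconfig path, and 2-second interval are assumptions, not minikube's actual node_ready.go implementation:

	// Sketch of a node-readiness poll in the style of the node_ready.go log lines.
	// Assumes a kubeconfig at the default location; not minikube's actual code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node's NodeReady condition is True.
	func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := nodeReady(cs, "addons-261813")
			fmt.Printf("node %q has status Ready=%v err=%v\n", "addons-261813", ready, err)
			if ready {
				return
			}
			time.Sleep(2 * time.Second) // assumed interval; the log above ticks roughly every 2.5s
		}
	}
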
	I0730 02:27:21.933122 1598980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-l22tb" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:21.934634 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:22.161635 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:22.258272 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:22.271888 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:22.414602 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:22.656072 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:22.748943 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:22.757200 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:22.889287 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:23.157817 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:23.241940 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:23.246380 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:23.392013 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:23.443908 1598980 pod_ready.go:92] pod "coredns-7db6d8ff4d-l22tb" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.443989 1598980 pod_ready.go:81] duration metric: took 1.510831198s for pod "coredns-7db6d8ff4d-l22tb" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.444030 1598980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.457526 1598980 pod_ready.go:92] pod "etcd-addons-261813" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.457595 1598980 pod_ready.go:81] duration metric: took 13.53433ms for pod "etcd-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.457626 1598980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.464891 1598980 pod_ready.go:92] pod "kube-apiserver-addons-261813" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.464963 1598980 pod_ready.go:81] duration metric: took 7.316736ms for pod "kube-apiserver-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.464990 1598980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.472032 1598980 pod_ready.go:92] pod "kube-controller-manager-addons-261813" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.472097 1598980 pod_ready.go:81] duration metric: took 7.087014ms for pod "kube-controller-manager-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.472125 1598980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s88xb" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.477220 1598980 pod_ready.go:92] pod "kube-proxy-s88xb" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.477288 1598980 pod_ready.go:81] duration metric: took 5.142451ms for pod "kube-proxy-s88xb" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.477315 1598980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.661093 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:23.742737 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:23.744105 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:23.839325 1598980 pod_ready.go:92] pod "kube-scheduler-addons-261813" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.839352 1598980 pod_ready.go:81] duration metric: took 362.016377ms for pod "kube-scheduler-addons-261813" in "kube-system" namespace to be "Ready" ...
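
With the node Ready, the six system-critical pods clear their checks in well under two seconds each. The same gate can be reproduced by hand with kubectl wait, using the pod names and the 6m0s budget taken from the log above:

	kubectl --context addons-261813 -n kube-system wait --for=condition=Ready \
		pod/coredns-7db6d8ff4d-l22tb \
		pod/etcd-addons-261813 \
		pod/kube-apiserver-addons-261813 \
		pod/kube-controller-manager-addons-261813 \
		pod/kube-proxy-s88xb \
		pod/kube-scheduler-addons-261813 \
		--timeout=6m
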
	I0730 02:27:23.839365 1598980 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.888306 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:24.156661 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:24.240401 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:24.243128 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:24.388528 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:24.656801 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:24.740315 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:24.742032 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:24.888847 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:25.157076 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:25.242386 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:25.244240 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:25.389176 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:25.656749 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:25.742045 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:25.744956 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:25.845989 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
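
From here on, pod_ready.go:102 keeps reporting metrics-server-c59844bb4-8rfsg as not Ready; this is the readiness condition the metrics-server addon checks hinge on. The usual first triage steps, run by hand, would be to describe the pod and pull its container logs (the deploy/metrics-server name is assumed from the pod name, not confirmed by this report):

	kubectl --context addons-261813 -n kube-system describe pod metrics-server-c59844bb4-8rfsg
	kubectl --context addons-261813 -n kube-system logs deploy/metrics-server --tail=50
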
	I0730 02:27:25.889402 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:26.157092 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:26.244564 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:26.246632 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:26.389597 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:26.657783 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:26.745945 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:26.748693 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:26.889630 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:27.157559 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:27.244068 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:27.247920 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:27.389480 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:27.658429 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:27.745974 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:27.747399 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:27.846796 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:27.889568 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:28.157233 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:28.256164 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:28.258455 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:28.390020 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:28.656811 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:28.743524 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:28.744910 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:28.888904 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:29.157087 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:29.242574 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:29.245483 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:29.392135 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:29.658808 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:29.744380 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:29.748074 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:29.888784 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:30.156897 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:30.241788 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:30.244997 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:30.347187 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:30.388875 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:30.656369 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:30.742474 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:30.746184 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:30.889008 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:31.158484 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:31.242096 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:31.243646 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:31.388944 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:31.664838 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:31.741677 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:31.742921 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:31.890234 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:32.157100 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:32.242736 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:32.243914 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:32.390099 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:32.656816 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:32.745047 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:32.748330 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:32.847453 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:32.889014 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:33.155872 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:33.249118 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:33.252153 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:33.389975 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:33.657687 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:33.744596 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:33.745819 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:33.888944 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:34.156311 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:34.241262 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:34.242873 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:34.388925 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:34.655527 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:34.745476 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:34.747158 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:34.847656 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:34.888860 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:35.159849 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:35.242704 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:35.245407 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:35.392359 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:35.656767 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:35.743360 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:35.746254 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:35.888257 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:36.155499 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:36.246078 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:36.246866 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:36.388316 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:36.665716 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:36.744135 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:36.744735 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:36.889456 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:37.156637 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:37.245774 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:37.247625 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:37.345513 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:37.389041 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:37.656291 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:37.741618 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:37.744451 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:37.888783 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:38.156459 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:38.241586 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:38.242555 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:38.388898 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:38.657059 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:38.743653 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:38.744702 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:38.889734 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:39.157151 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:39.241523 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:39.244735 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:39.348058 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:39.388460 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:39.661541 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:39.744327 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:39.747834 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:39.889794 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:40.158452 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:40.243794 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:40.248221 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:40.388784 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:40.656871 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:40.742511 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:40.745659 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:40.890306 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:41.160405 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:41.244068 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:41.247734 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:41.391456 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:41.658267 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:41.748432 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:41.766233 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:41.862796 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:41.898048 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:42.159033 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:42.242678 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:42.243020 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:42.390212 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:42.656308 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:42.742220 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:42.745305 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:42.889195 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:43.168819 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:43.241549 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:43.242371 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:43.389816 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:43.661952 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:43.743730 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:43.744050 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:43.888823 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:44.156827 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:44.241281 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:44.244327 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:44.347542 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:44.389023 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:44.657772 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:44.745203 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:44.746974 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:44.889815 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:45.161226 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:45.244598 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:45.249847 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:45.389642 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:45.661166 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:45.744716 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:45.748208 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:45.891546 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:46.157943 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:46.242408 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:46.242613 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:46.388772 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:46.655777 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:46.743029 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:46.744016 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:46.845924 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:46.888379 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:47.156185 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:47.240576 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:47.241474 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:47.388673 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:47.656341 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:47.748962 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:47.752821 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:47.889281 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:48.157098 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:48.243699 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:48.250429 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:48.388994 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:48.656958 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:48.742557 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:48.743606 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:48.888717 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:49.155486 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:49.241582 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:49.242255 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:49.345697 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:49.388970 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:49.656004 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:49.741384 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:49.742323 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:49.888380 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:50.156088 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:50.241500 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:50.242979 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:50.388521 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:50.656398 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:50.743025 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:50.750395 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:50.889327 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:51.156416 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:51.254771 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:51.259707 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:51.346167 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:51.388814 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:51.657805 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:51.741215 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:51.743794 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:51.888434 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:52.156226 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:52.244465 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:52.244972 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:52.388411 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:52.659125 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:52.770657 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:52.771649 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:52.892810 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:53.156657 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:53.245170 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:53.250173 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:53.348569 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:53.388941 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:53.657678 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:53.748140 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:53.750962 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:53.890711 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:54.157326 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:54.246197 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:54.247667 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:54.389173 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:54.656676 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:54.742031 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:54.744161 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:54.889560 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:55.157241 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:55.249933 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:55.251041 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:55.349372 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:55.388316 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:55.660705 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:55.774711 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:55.776530 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:55.888971 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:56.156842 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:56.242266 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:56.244773 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:56.389431 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:56.657098 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:56.743897 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:56.745232 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:56.888697 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:57.155510 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:57.242839 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:57.243565 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:57.389765 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:57.656855 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:57.741205 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:57.742810 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:57.847388 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:57.888116 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:58.158054 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:58.244831 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:58.245781 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:58.389867 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:58.658079 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:58.745203 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:58.747930 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:58.889414 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:59.157531 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:59.243180 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:59.247283 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:59.388938 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:59.657475 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:59.742467 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:59.744679 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:59.888939 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:00.160616 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:00.252395 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:28:00.253587 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:00.350068 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:00.391766 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:00.656803 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:00.746173 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:28:00.748054 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:00.897731 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:01.155939 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:01.243245 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:28:01.243654 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:01.388817 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:01.656798 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:01.743857 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:01.746821 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:28:01.897646 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:02.159857 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:02.243250 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:28:02.245027 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:02.390013 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:02.656696 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:02.749352 1598980 kapi.go:107] duration metric: took 1m21.513418852s to wait for kubernetes.io/minikube-addons=registry ...
	I0730 02:28:02.749577 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:02.848142 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:02.888811 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:03.156526 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:03.241405 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:03.387949 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:03.662752 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:03.741678 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:03.889969 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:04.157830 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:04.242503 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:04.388493 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:04.656514 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:04.741941 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:04.888889 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:05.156290 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:05.241704 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:05.345461 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:05.389613 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:05.657202 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:05.741369 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:05.888949 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:06.159710 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:06.243196 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:06.389285 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:06.661793 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:06.741873 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:06.890459 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:07.163874 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:07.242189 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:07.346425 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:07.395111 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:07.655843 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:07.741661 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:07.888874 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:08.155693 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:08.241947 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:08.388806 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:08.656422 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:08.741579 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:08.888913 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:09.156854 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:09.242128 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:09.390678 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:09.656367 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:09.745398 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:09.845480 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:09.888843 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:10.156439 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:10.241864 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:10.390095 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:10.657490 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:10.742940 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:10.888251 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:11.157715 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:11.241264 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:11.388836 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:11.662074 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:11.741910 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:11.848216 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:11.889148 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:12.157277 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:12.241358 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:12.392404 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:12.655698 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:12.749165 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:12.888569 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:13.156148 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:13.240954 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:13.388823 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:13.656546 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:13.744404 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:13.889254 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:14.156381 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:14.241015 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:14.345361 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:14.388787 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:14.656225 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:14.741548 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:14.888029 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:15.157140 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:15.242703 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:15.389268 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:15.657173 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:15.744271 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:15.888749 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:16.160140 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:16.246500 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:16.349608 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:16.388401 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:16.656737 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:16.742955 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:16.889254 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:17.156494 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:17.241864 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:17.388780 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:17.655413 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:17.742107 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:17.894320 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:18.159392 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:18.241344 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:18.389267 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:18.655530 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:18.741712 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:18.846034 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:18.888836 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:19.156365 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:19.242264 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:19.388259 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:19.655737 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:19.743461 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:19.889183 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:20.157161 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:20.242092 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:20.392902 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:20.657194 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:20.744592 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:20.890310 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:21.157437 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:21.242078 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:21.345797 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:21.388434 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:21.659262 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:21.741773 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:21.888364 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:22.159038 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:22.241989 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:22.389646 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:22.658201 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:22.741795 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:22.889821 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:23.159110 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:23.247035 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:23.346518 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:23.390483 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:23.657400 1598980 kapi.go:107] duration metric: took 1m42.007032154s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
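The kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave Pending; the matching kapi.go:107 line records how long the wait took. As a rough manual equivalent (a sketch only, not the harness's own code), the same readiness wait can be expressed with kubectl; the kube-system namespace is an assumption here, since these log lines do not name where the addon pods run:

	# Wait for the CSI hostpath driver pods to report Ready (namespace assumed)
	kubectl --context addons-261813 -n kube-system wait pod \
	  --selector=kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --for=condition=Ready --timeout=10m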
	I0730 02:28:23.741708 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:23.888557 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:24.241662 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:24.389164 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:24.742178 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:24.888191 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:25.242072 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:25.388918 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:25.740957 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:25.846464 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:25.888798 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:26.241231 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:26.388552 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:26.741616 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:26.888140 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:27.241842 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:27.389637 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:27.741424 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:27.889524 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:28.243390 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:28.345547 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:28.388222 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:28.741166 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:28.888955 1598980 kapi.go:107] duration metric: took 1m43.504167481s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0730 02:28:28.890882 1598980 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-261813 cluster.
	I0730 02:28:28.893194 1598980 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0730 02:28:28.894649 1598980 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
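The three out.go messages above describe the gcp-auth addon's behavior once it is ready: credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label. A minimal opt-out sketch follows; the pod name is a placeholder, and the value "true" is an assumption, since the message only specifies the label key:

	# Opt a single pod out of credential mounting; "demo-pod" is hypothetical
	kubectl --context addons-261813 label pod demo-pod gcp-auth-skip-secret=true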
	I0730 02:28:29.241283 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:29.742470 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:30.244543 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:30.349465 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:30.742354 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:31.242964 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:31.742115 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:32.245438 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:32.345435 1598980 pod_ready.go:92] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"True"
	I0730 02:28:32.345498 1598980 pod_ready.go:81] duration metric: took 1m8.506124762s for pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace to be "Ready" ...
	I0730 02:28:32.345525 1598980 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zrzpl" in "kube-system" namespace to be "Ready" ...
	I0730 02:28:32.350238 1598980 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-zrzpl" in "kube-system" namespace has status "Ready":"True"
	I0730 02:28:32.350308 1598980 pod_ready.go:81] duration metric: took 4.760945ms for pod "nvidia-device-plugin-daemonset-zrzpl" in "kube-system" namespace to be "Ready" ...
	I0730 02:28:32.350344 1598980 pod_ready.go:38] duration metric: took 1m10.467552421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
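The pod_ready.go lines are minikube polling each pod's Ready condition; here metrics-server flipped to "True" after roughly 1m8s. The same condition can be read directly with a JSONPath query (a sketch of an equivalent check, not what the harness runs):

	# Print the Ready condition ("True"/"False") of the metrics-server pod
	kubectl --context addons-261813 -n kube-system get pod metrics-server-c59844bb4-8rfsg \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'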
	I0730 02:28:32.352283 1598980 api_server.go:52] waiting for apiserver process to appear ...
	I0730 02:28:32.352932 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 02:28:32.353029 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 02:28:32.440050 1598980 cri.go:89] found id: "a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:32.440119 1598980 cri.go:89] found id: ""
	I0730 02:28:32.440140 1598980 logs.go:276] 1 containers: [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8]
	I0730 02:28:32.440230 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.446273 1598980 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 02:28:32.446398 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 02:28:32.524497 1598980 cri.go:89] found id: "ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:32.524564 1598980 cri.go:89] found id: ""
	I0730 02:28:32.524585 1598980 logs.go:276] 1 containers: [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3]
	I0730 02:28:32.524692 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.535887 1598980 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 02:28:32.536058 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 02:28:32.617548 1598980 cri.go:89] found id: "7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:32.617611 1598980 cri.go:89] found id: ""
	I0730 02:28:32.617640 1598980 logs.go:276] 1 containers: [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617]
	I0730 02:28:32.617717 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.623355 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 02:28:32.623480 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 02:28:32.699527 1598980 cri.go:89] found id: "108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:32.699589 1598980 cri.go:89] found id: ""
	I0730 02:28:32.699613 1598980 logs.go:276] 1 containers: [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa]
	I0730 02:28:32.699690 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.703390 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 02:28:32.703519 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 02:28:32.741953 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:32.772834 1598980 cri.go:89] found id: "93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:32.772892 1598980 cri.go:89] found id: ""
	I0730 02:28:32.772914 1598980 logs.go:276] 1 containers: [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5]
	I0730 02:28:32.772988 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.776467 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 02:28:32.776580 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 02:28:32.825788 1598980 cri.go:89] found id: "35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:32.825861 1598980 cri.go:89] found id: ""
	I0730 02:28:32.825885 1598980 logs.go:276] 1 containers: [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c]
	I0730 02:28:32.825960 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.830055 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 02:28:32.830176 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 02:28:32.898471 1598980 cri.go:89] found id: "cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:32.898543 1598980 cri.go:89] found id: ""
	I0730 02:28:32.898566 1598980 logs.go:276] 1 containers: [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a]
	I0730 02:28:32.898649 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.903826 1598980 logs.go:123] Gathering logs for container status ...
	I0730 02:28:32.903896 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 02:28:32.974847 1598980 logs.go:123] Gathering logs for CRI-O ...
	I0730 02:28:32.974919 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 02:28:33.104094 1598980 logs.go:123] Gathering logs for dmesg ...
	I0730 02:28:33.104177 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 02:28:33.132615 1598980 logs.go:123] Gathering logs for describe nodes ...
	I0730 02:28:33.132694 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 02:28:33.242663 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:33.401070 1598980 logs.go:123] Gathering logs for kube-apiserver [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8] ...
	I0730 02:28:33.401106 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:33.476893 1598980 logs.go:123] Gathering logs for etcd [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3] ...
	I0730 02:28:33.476972 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:33.528911 1598980 logs.go:123] Gathering logs for coredns [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617] ...
	I0730 02:28:33.528947 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:33.600134 1598980 logs.go:123] Gathering logs for kube-scheduler [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa] ...
	I0730 02:28:33.600224 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:33.657735 1598980 logs.go:123] Gathering logs for kube-proxy [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5] ...
	I0730 02:28:33.657820 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:33.733299 1598980 logs.go:123] Gathering logs for kubelet ...
	I0730 02:28:33.733376 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 02:28:33.743799 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0730 02:28:33.768274 1598980 logs.go:138] Found kubelet problem: Jul 30 02:26:37 addons-261813 kubelet[1543]: E0730 02:26:37.474093    1543 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8bf4d64c-18bf-44e9-8f58-95218dce63f2-kube-api-access-t2wmz podName:8bf4d64c-18bf-44e9-8f58-95218dce63f2 nodeName:}" failed. No retries permitted until 2024-07-30 02:26:37.974064608 +0000 UTC m=+17.468428211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t2wmz" (UniqueName: "kubernetes.io/projected/8bf4d64c-18bf-44e9-8f58-95218dce63f2-kube-api-access-t2wmz") pod "kindnet-2j67p" (UID: "8bf4d64c-18bf-44e9-8f58-95218dce63f2") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-261813" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-261813' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0730 02:28:33.769740 1598980 logs.go:138] Found kubelet problem: Jul 30 02:26:40 addons-261813 kubelet[1543]: W0730 02:26:40.147742    1543 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:33.769997 1598980 logs.go:138] Found kubelet problem: Jul 30 02:26:40 addons-261813 kubelet[1543]: E0730 02:26:40.147796    1543 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:33.803596 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:33.803878 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:33.838753 1598980 logs.go:123] Gathering logs for kindnet [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a] ...
	I0730 02:28:33.838824 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:33.908406 1598980 logs.go:123] Gathering logs for kube-controller-manager [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c] ...
	I0730 02:28:33.908482 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:34.007001 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:34.007060 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0730 02:28:34.007159 1598980 out.go:239] X Problems detected in kubelet:
	W0730 02:28:34.007176 1598980 out.go:239]   Jul 30 02:26:37 addons-261813 kubelet[1543]: E0730 02:26:37.474093    1543 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8bf4d64c-18bf-44e9-8f58-95218dce63f2-kube-api-access-t2wmz podName:8bf4d64c-18bf-44e9-8f58-95218dce63f2 nodeName:}" failed. No retries permitted until 2024-07-30 02:26:37.974064608 +0000 UTC m=+17.468428211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t2wmz" (UniqueName: "kubernetes.io/projected/8bf4d64c-18bf-44e9-8f58-95218dce63f2-kube-api-access-t2wmz") pod "kindnet-2j67p" (UID: "8bf4d64c-18bf-44e9-8f58-95218dce63f2") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-261813" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-261813' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0730 02:28:34.007186 1598980 out.go:239]   Jul 30 02:26:40 addons-261813 kubelet[1543]: W0730 02:26:40.147742    1543 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:34.007194 1598980 out.go:239]   Jul 30 02:26:40 addons-261813 kubelet[1543]: E0730 02:26:40.147796    1543 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:34.007202 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:34.007217 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:34.007229 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:34.007235 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:28:34.242025 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:34.746596 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:35.241901 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:35.742270 1598980 kapi.go:107] duration metric: took 1m54.505417431s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0730 02:28:35.745445 1598980 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0730 02:28:35.747196 1598980 addons.go:510] duration metric: took 2m1.381541103s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns default-storageclass metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
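	The addon list above is the full set enabled during this start. A minimal sketch of how the same set could be inspected or toggled after startup, assuming the minikube binary is on PATH and the same profile name:

	    # Show which addons are enabled for this profile
	    minikube -p addons-261813 addons list

	    # Toggle an individual addon
	    minikube -p addons-261813 addons enable ingress
	    minikube -p addons-261813 addons disable inspektor-gadget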
	I0730 02:28:44.008890 1598980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 02:28:44.024139 1598980 api_server.go:72] duration metric: took 2m9.658727483s to wait for apiserver process to appear ...
	I0730 02:28:44.024169 1598980 api_server.go:88] waiting for apiserver healthz status ...
	I0730 02:28:44.024203 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 02:28:44.024267 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 02:28:44.063492 1598980 cri.go:89] found id: "a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:44.063516 1598980 cri.go:89] found id: ""
	I0730 02:28:44.063524 1598980 logs.go:276] 1 containers: [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8]
	I0730 02:28:44.063582 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.067177 1598980 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 02:28:44.067253 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 02:28:44.103992 1598980 cri.go:89] found id: "ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:44.104018 1598980 cri.go:89] found id: ""
	I0730 02:28:44.104027 1598980 logs.go:276] 1 containers: [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3]
	I0730 02:28:44.104081 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.107723 1598980 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 02:28:44.107799 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 02:28:44.148952 1598980 cri.go:89] found id: "7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:44.148974 1598980 cri.go:89] found id: ""
	I0730 02:28:44.148981 1598980 logs.go:276] 1 containers: [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617]
	I0730 02:28:44.149040 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.152506 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 02:28:44.152582 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 02:28:44.193479 1598980 cri.go:89] found id: "108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:44.193504 1598980 cri.go:89] found id: ""
	I0730 02:28:44.193512 1598980 logs.go:276] 1 containers: [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa]
	I0730 02:28:44.193573 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.197405 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 02:28:44.197482 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 02:28:44.235248 1598980 cri.go:89] found id: "93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:44.235272 1598980 cri.go:89] found id: ""
	I0730 02:28:44.235281 1598980 logs.go:276] 1 containers: [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5]
	I0730 02:28:44.235338 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.238957 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 02:28:44.239040 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 02:28:44.278132 1598980 cri.go:89] found id: "35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:44.278156 1598980 cri.go:89] found id: ""
	I0730 02:28:44.278165 1598980 logs.go:276] 1 containers: [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c]
	I0730 02:28:44.278263 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.281919 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 02:28:44.281999 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 02:28:44.326100 1598980 cri.go:89] found id: "cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:44.326123 1598980 cri.go:89] found id: ""
	I0730 02:28:44.326132 1598980 logs.go:276] 1 containers: [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a]
	I0730 02:28:44.326188 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.329752 1598980 logs.go:123] Gathering logs for kube-proxy [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5] ...
	I0730 02:28:44.329778 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:44.368816 1598980 logs.go:123] Gathering logs for kindnet [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a] ...
	I0730 02:28:44.368845 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:44.421705 1598980 logs.go:123] Gathering logs for container status ...
	I0730 02:28:44.421735 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 02:28:44.474570 1598980 logs.go:123] Gathering logs for dmesg ...
	I0730 02:28:44.474603 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 02:28:44.493820 1598980 logs.go:123] Gathering logs for describe nodes ...
	I0730 02:28:44.493862 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 02:28:44.624578 1598980 logs.go:123] Gathering logs for kube-apiserver [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8] ...
	I0730 02:28:44.624610 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:44.679610 1598980 logs.go:123] Gathering logs for etcd [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3] ...
	I0730 02:28:44.679645 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:44.729466 1598980 logs.go:123] Gathering logs for coredns [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617] ...
	I0730 02:28:44.729503 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:44.770825 1598980 logs.go:123] Gathering logs for kubelet ...
	I0730 02:28:44.770862 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0730 02:28:44.792188 1598980 logs.go:138] Found kubelet problem: Jul 30 02:26:40 addons-261813 kubelet[1543]: W0730 02:26:40.147742    1543 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:44.792446 1598980 logs.go:138] Found kubelet problem: Jul 30 02:26:40 addons-261813 kubelet[1543]: E0730 02:26:40.147796    1543 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:44.820351 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:44.820574 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:44.857092 1598980 logs.go:123] Gathering logs for kube-scheduler [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa] ...
	I0730 02:28:44.857138 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:44.900209 1598980 logs.go:123] Gathering logs for kube-controller-manager [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c] ...
	I0730 02:28:44.900243 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:44.970007 1598980 logs.go:123] Gathering logs for CRI-O ...
	I0730 02:28:44.970044 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 02:28:45.078748 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:45.078839 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0730 02:28:45.078951 1598980 out.go:239] X Problems detected in kubelet:
	W0730 02:28:45.079246 1598980 out.go:239]   Jul 30 02:26:40 addons-261813 kubelet[1543]: W0730 02:26:40.147742    1543 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:45.079316 1598980 out.go:239]   Jul 30 02:26:40 addons-261813 kubelet[1543]: E0730 02:26:40.147796    1543 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:45.079359 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:45.079428 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:45.079447 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:45.079486 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:28:55.081169 1598980 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:28:55.091451 1598980 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0730 02:28:55.092545 1598980 api_server.go:141] control plane version: v1.30.3
	I0730 02:28:55.092574 1598980 api_server.go:131] duration metric: took 11.068397148s to wait for apiserver health ...
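	The healthz wait above polls the apiserver endpoint directly. By default Kubernetes exposes /healthz (along with /livez and /readyz) to unauthenticated clients, so the same probe can be run by hand; a minimal sketch, assuming the node IP is reachable and using -k to skip CA verification:

	    curl -k https://192.168.49.2:8443/healthz
	    # expected body: ok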
	I0730 02:28:55.092583 1598980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 02:28:55.092604 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 02:28:55.092664 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 02:28:55.137090 1598980 cri.go:89] found id: "a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:55.137112 1598980 cri.go:89] found id: ""
	I0730 02:28:55.137120 1598980 logs.go:276] 1 containers: [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8]
	I0730 02:28:55.137180 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.140857 1598980 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 02:28:55.140929 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 02:28:55.184095 1598980 cri.go:89] found id: "ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:55.184119 1598980 cri.go:89] found id: ""
	I0730 02:28:55.184128 1598980 logs.go:276] 1 containers: [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3]
	I0730 02:28:55.184188 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.187573 1598980 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 02:28:55.187639 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 02:28:55.228853 1598980 cri.go:89] found id: "7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:55.228876 1598980 cri.go:89] found id: ""
	I0730 02:28:55.228883 1598980 logs.go:276] 1 containers: [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617]
	I0730 02:28:55.228937 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.232936 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 02:28:55.233007 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 02:28:55.290768 1598980 cri.go:89] found id: "108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:55.290792 1598980 cri.go:89] found id: ""
	I0730 02:28:55.290800 1598980 logs.go:276] 1 containers: [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa]
	I0730 02:28:55.290857 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.294565 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 02:28:55.294672 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 02:28:55.339009 1598980 cri.go:89] found id: "93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:55.339090 1598980 cri.go:89] found id: ""
	I0730 02:28:55.339113 1598980 logs.go:276] 1 containers: [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5]
	I0730 02:28:55.339185 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.342795 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 02:28:55.342901 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 02:28:55.381815 1598980 cri.go:89] found id: "35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:55.381835 1598980 cri.go:89] found id: ""
	I0730 02:28:55.381843 1598980 logs.go:276] 1 containers: [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c]
	I0730 02:28:55.381917 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.385651 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 02:28:55.385737 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 02:28:55.429393 1598980 cri.go:89] found id: "cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:55.429466 1598980 cri.go:89] found id: ""
	I0730 02:28:55.429487 1598980 logs.go:276] 1 containers: [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a]
	I0730 02:28:55.429577 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.433910 1598980 logs.go:123] Gathering logs for coredns [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617] ...
	I0730 02:28:55.433943 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:55.481178 1598980 logs.go:123] Gathering logs for kube-scheduler [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa] ...
	I0730 02:28:55.481208 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:55.528018 1598980 logs.go:123] Gathering logs for CRI-O ...
	I0730 02:28:55.528050 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 02:28:55.628171 1598980 logs.go:123] Gathering logs for container status ...
	I0730 02:28:55.628210 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 02:28:55.676340 1598980 logs.go:123] Gathering logs for kube-apiserver [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8] ...
	I0730 02:28:55.676385 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:55.750498 1598980 logs.go:123] Gathering logs for dmesg ...
	I0730 02:28:55.750530 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 02:28:55.769769 1598980 logs.go:123] Gathering logs for describe nodes ...
	I0730 02:28:55.769852 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 02:28:55.904026 1598980 logs.go:123] Gathering logs for etcd [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3] ...
	I0730 02:28:55.904056 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:55.964544 1598980 logs.go:123] Gathering logs for kube-proxy [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5] ...
	I0730 02:28:55.964650 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:56.000485 1598980 logs.go:123] Gathering logs for kube-controller-manager [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c] ...
	I0730 02:28:56.000523 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:56.074408 1598980 logs.go:123] Gathering logs for kindnet [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a] ...
	I0730 02:28:56.074447 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:56.126605 1598980 logs.go:123] Gathering logs for kubelet ...
	I0730 02:28:56.126638 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0730 02:28:56.173577 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:56.173837 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:56.210264 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:56.210295 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0730 02:28:56.210354 1598980 out.go:239] X Problems detected in kubelet:
	W0730 02:28:56.210367 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:56.210375 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:56.210389 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:56.210396 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:29:06.220639 1598980 system_pods.go:59] 18 kube-system pods found
	I0730 02:29:06.220696 1598980 system_pods.go:61] "coredns-7db6d8ff4d-l22tb" [a21b727b-d2cf-4251-8850-9f55d4483afa] Running
	I0730 02:29:06.220710 1598980 system_pods.go:61] "csi-hostpath-attacher-0" [0c79c5cf-ac99-42f9-be4b-3d1d454d90f5] Running
	I0730 02:29:06.220715 1598980 system_pods.go:61] "csi-hostpath-resizer-0" [9e216baa-8fce-4cbd-b955-95512c092fe4] Running
	I0730 02:29:06.220720 1598980 system_pods.go:61] "csi-hostpathplugin-d8vp2" [38eae3e8-34a7-49ac-94d8-1c7fe18609b6] Running
	I0730 02:29:06.220725 1598980 system_pods.go:61] "etcd-addons-261813" [0a75d41c-1d52-41ee-b68b-4032433f51e7] Running
	I0730 02:29:06.220730 1598980 system_pods.go:61] "kindnet-2j67p" [8bf4d64c-18bf-44e9-8f58-95218dce63f2] Running
	I0730 02:29:06.220735 1598980 system_pods.go:61] "kube-apiserver-addons-261813" [c9db107c-71f7-45c7-864d-1c7f1cc5f826] Running
	I0730 02:29:06.220739 1598980 system_pods.go:61] "kube-controller-manager-addons-261813" [293a3bf9-5b9f-47f5-b518-a1e2374f11f1] Running
	I0730 02:29:06.220744 1598980 system_pods.go:61] "kube-ingress-dns-minikube" [0cca283f-f80d-4219-a735-ce5eb75135f4] Running
	I0730 02:29:06.220748 1598980 system_pods.go:61] "kube-proxy-s88xb" [9ef700dc-4b56-4fbd-82bf-b9e75360235b] Running
	I0730 02:29:06.220752 1598980 system_pods.go:61] "kube-scheduler-addons-261813" [22c54723-d213-4ea3-b23c-45042048293e] Running
	I0730 02:29:06.220759 1598980 system_pods.go:61] "metrics-server-c59844bb4-8rfsg" [fac509e3-535c-40c1-ad6c-61226795aa5e] Running
	I0730 02:29:06.220763 1598980 system_pods.go:61] "nvidia-device-plugin-daemonset-zrzpl" [73510050-2ea7-49cd-bf93-d1b56047d84f] Running
	I0730 02:29:06.220767 1598980 system_pods.go:61] "registry-698f998955-hmxpq" [831bfd95-6ae5-4eae-883c-71619d8c8922] Running
	I0730 02:29:06.220770 1598980 system_pods.go:61] "registry-proxy-c2b4j" [8c7f77e7-adf9-4a3f-8a9a-0e7e917e1a2f] Running
	I0730 02:29:06.220775 1598980 system_pods.go:61] "snapshot-controller-745499f584-47q8z" [370da19b-339e-4fdf-a88d-dced8fc43691] Running
	I0730 02:29:06.220779 1598980 system_pods.go:61] "snapshot-controller-745499f584-lxhf2" [6796fbfb-03e1-4e7b-ac36-f89275c1dc6c] Running
	I0730 02:29:06.220787 1598980 system_pods.go:61] "storage-provisioner" [f5c1a2d3-2530-4dbb-843a-cce7e3bc6767] Running
	I0730 02:29:06.220794 1598980 system_pods.go:74] duration metric: took 11.128204865s to wait for pod list to return data ...
	I0730 02:29:06.220807 1598980 default_sa.go:34] waiting for default service account to be created ...
	I0730 02:29:06.223327 1598980 default_sa.go:45] found service account: "default"
	I0730 02:29:06.223356 1598980 default_sa.go:55] duration metric: took 2.54213ms for default service account to be created ...
	I0730 02:29:06.223380 1598980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0730 02:29:06.232601 1598980 system_pods.go:86] 18 kube-system pods found
	I0730 02:29:06.232637 1598980 system_pods.go:89] "coredns-7db6d8ff4d-l22tb" [a21b727b-d2cf-4251-8850-9f55d4483afa] Running
	I0730 02:29:06.232644 1598980 system_pods.go:89] "csi-hostpath-attacher-0" [0c79c5cf-ac99-42f9-be4b-3d1d454d90f5] Running
	I0730 02:29:06.232649 1598980 system_pods.go:89] "csi-hostpath-resizer-0" [9e216baa-8fce-4cbd-b955-95512c092fe4] Running
	I0730 02:29:06.232653 1598980 system_pods.go:89] "csi-hostpathplugin-d8vp2" [38eae3e8-34a7-49ac-94d8-1c7fe18609b6] Running
	I0730 02:29:06.232683 1598980 system_pods.go:89] "etcd-addons-261813" [0a75d41c-1d52-41ee-b68b-4032433f51e7] Running
	I0730 02:29:06.232689 1598980 system_pods.go:89] "kindnet-2j67p" [8bf4d64c-18bf-44e9-8f58-95218dce63f2] Running
	I0730 02:29:06.232693 1598980 system_pods.go:89] "kube-apiserver-addons-261813" [c9db107c-71f7-45c7-864d-1c7f1cc5f826] Running
	I0730 02:29:06.232708 1598980 system_pods.go:89] "kube-controller-manager-addons-261813" [293a3bf9-5b9f-47f5-b518-a1e2374f11f1] Running
	I0730 02:29:06.232714 1598980 system_pods.go:89] "kube-ingress-dns-minikube" [0cca283f-f80d-4219-a735-ce5eb75135f4] Running
	I0730 02:29:06.232721 1598980 system_pods.go:89] "kube-proxy-s88xb" [9ef700dc-4b56-4fbd-82bf-b9e75360235b] Running
	I0730 02:29:06.232725 1598980 system_pods.go:89] "kube-scheduler-addons-261813" [22c54723-d213-4ea3-b23c-45042048293e] Running
	I0730 02:29:06.232733 1598980 system_pods.go:89] "metrics-server-c59844bb4-8rfsg" [fac509e3-535c-40c1-ad6c-61226795aa5e] Running
	I0730 02:29:06.232737 1598980 system_pods.go:89] "nvidia-device-plugin-daemonset-zrzpl" [73510050-2ea7-49cd-bf93-d1b56047d84f] Running
	I0730 02:29:06.232764 1598980 system_pods.go:89] "registry-698f998955-hmxpq" [831bfd95-6ae5-4eae-883c-71619d8c8922] Running
	I0730 02:29:06.232775 1598980 system_pods.go:89] "registry-proxy-c2b4j" [8c7f77e7-adf9-4a3f-8a9a-0e7e917e1a2f] Running
	I0730 02:29:06.232779 1598980 system_pods.go:89] "snapshot-controller-745499f584-47q8z" [370da19b-339e-4fdf-a88d-dced8fc43691] Running
	I0730 02:29:06.232784 1598980 system_pods.go:89] "snapshot-controller-745499f584-lxhf2" [6796fbfb-03e1-4e7b-ac36-f89275c1dc6c] Running
	I0730 02:29:06.232791 1598980 system_pods.go:89] "storage-provisioner" [f5c1a2d3-2530-4dbb-843a-cce7e3bc6767] Running
	I0730 02:29:06.232798 1598980 system_pods.go:126] duration metric: took 9.411713ms to wait for k8s-apps to be running ...
	I0730 02:29:06.232811 1598980 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 02:29:06.232880 1598980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 02:29:06.244932 1598980 system_svc.go:56] duration metric: took 12.105717ms WaitForService to wait for kubelet
	I0730 02:29:06.244963 1598980 kubeadm.go:582] duration metric: took 2m31.879557355s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 02:29:06.244984 1598980 node_conditions.go:102] verifying NodePressure condition ...
	I0730 02:29:06.248749 1598980 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0730 02:29:06.248784 1598980 node_conditions.go:123] node cpu capacity is 2
	I0730 02:29:06.248797 1598980 node_conditions.go:105] duration metric: took 3.807112ms to run NodePressure ...
	I0730 02:29:06.248811 1598980 start.go:241] waiting for startup goroutines ...
	I0730 02:29:06.248819 1598980 start.go:246] waiting for cluster config update ...
	I0730 02:29:06.248845 1598980 start.go:255] writing updated cluster config ...
	I0730 02:29:06.249142 1598980 ssh_runner.go:195] Run: rm -f paused
	I0730 02:29:06.603247 1598980 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0730 02:29:06.605926 1598980 out.go:177] * Done! kubectl is now configured to use "addons-261813" cluster and "default" namespace by default
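	With startup complete, the kubeconfig now points at the new cluster and the client and server versions match (1.30.3 on both sides, minor skew 0). A quick sanity check, sketched here assuming kubectl is installed on the host:

	    kubectl config current-context   # expect: addons-261813
	    kubectl get nodes                # the control-plane node should report Ready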
	
	
	==> CRI-O <==
	Jul 30 02:33:26 addons-261813 crio[961]: time="2024-07-30 02:33:26.575568437Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bf771276-5e98-437d-bb30-3222e11faf86 name=/runtime.v1.ImageService/ImageStatus
	Jul 30 02:33:26 addons-261813 crio[961]: time="2024-07-30 02:33:26.576843026Z" level=info msg="Creating container: default/hello-world-app-6778b5fc9f-sfmbr/hello-world-app" id=6fda0555-00f7-4968-ae13-6ae2100c7dfd name=/runtime.v1.RuntimeService/CreateContainer
	Jul 30 02:33:26 addons-261813 crio[961]: time="2024-07-30 02:33:26.576943799Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 30 02:33:26 addons-261813 crio[961]: time="2024-07-30 02:33:26.595591136Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/135915fb9afc91b1b760ebf8f308b89d95dc34aaa029aa9d029ca44eef54d8ac/merged/etc/passwd: no such file or directory"
	Jul 30 02:33:26 addons-261813 crio[961]: time="2024-07-30 02:33:26.595632882Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/135915fb9afc91b1b760ebf8f308b89d95dc34aaa029aa9d029ca44eef54d8ac/merged/etc/group: no such file or directory"
	Jul 30 02:33:26 addons-261813 crio[961]: time="2024-07-30 02:33:26.638184095Z" level=info msg="Created container 8609bed30522f3fbfe407fa20635e4a66943baee5eab428b73375f0ad48cef26: default/hello-world-app-6778b5fc9f-sfmbr/hello-world-app" id=6fda0555-00f7-4968-ae13-6ae2100c7dfd name=/runtime.v1.RuntimeService/CreateContainer
	Jul 30 02:33:26 addons-261813 crio[961]: time="2024-07-30 02:33:26.639024716Z" level=info msg="Starting container: 8609bed30522f3fbfe407fa20635e4a66943baee5eab428b73375f0ad48cef26" id=99cc66d0-5990-4e91-ace2-508f680d3924 name=/runtime.v1.RuntimeService/StartContainer
	Jul 30 02:33:26 addons-261813 crio[961]: time="2024-07-30 02:33:26.649764209Z" level=info msg="Started container" PID=8620 containerID=8609bed30522f3fbfe407fa20635e4a66943baee5eab428b73375f0ad48cef26 description=default/hello-world-app-6778b5fc9f-sfmbr/hello-world-app id=99cc66d0-5990-4e91-ace2-508f680d3924 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b7d893e59741580f89bdf4fcc59514af7b1253e20d8d36048de822bd5deb6d36
	Jul 30 02:33:26 addons-261813 crio[961]: time="2024-07-30 02:33:26.836036621Z" level=info msg="Removing container: de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3" id=47de1e5d-5acc-4347-91a1-73c619615fdd name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 30 02:33:26 addons-261813 crio[961]: time="2024-07-30 02:33:26.859621045Z" level=info msg="Removed container de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=47de1e5d-5acc-4347-91a1-73c619615fdd name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 30 02:33:28 addons-261813 crio[961]: time="2024-07-30 02:33:28.541383953Z" level=info msg="Stopping container: a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963 (timeout: 2s)" id=6a644d95-fa9e-4cdd-8614-84ecb81883fe name=/runtime.v1.RuntimeService/StopContainer
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.547543214Z" level=warning msg="Stopping container a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=6a644d95-fa9e-4cdd-8614-84ecb81883fe name=/runtime.v1.RuntimeService/StopContainer
	Jul 30 02:33:30 addons-261813 conmon[5322]: conmon a603f3d0c907e8f42975 <ninfo>: container 5333 exited with status 137
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.686669448Z" level=info msg="Stopped container a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963: ingress-nginx/ingress-nginx-controller-6d9bd977d4-qtc5f/controller" id=6a644d95-fa9e-4cdd-8614-84ecb81883fe name=/runtime.v1.RuntimeService/StopContainer
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.687417173Z" level=info msg="Stopping pod sandbox: 9d426a5dad7edfdea04270d6cb67fe41469aceceb252b60ba20cb6100d9ab71f" id=3e0e0372-5749-40ac-9bed-969fd6c1743f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.690812223Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-HCEQXC64I3PRJMBE - [0:0]\n:KUBE-HP-NKFLLQLXO7OQ5ZX2 - [0:0]\n-X KUBE-HP-HCEQXC64I3PRJMBE\n-X KUBE-HP-NKFLLQLXO7OQ5ZX2\nCOMMIT\n"
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.692162043Z" level=info msg="Closing host port tcp:80"
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.692208762Z" level=info msg="Closing host port tcp:443"
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.693563595Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.693592304Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.693748617Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-6d9bd977d4-qtc5f Namespace:ingress-nginx ID:9d426a5dad7edfdea04270d6cb67fe41469aceceb252b60ba20cb6100d9ab71f UID:1c03fe45-826c-4247-8a5a-13cc26a231ad NetNS:/var/run/netns/f221166f-348a-40de-805d-5166d58e6042 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.693881438Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-6d9bd977d4-qtc5f from CNI network \"kindnet\" (type=ptp)"
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.720253903Z" level=info msg="Stopped pod sandbox: 9d426a5dad7edfdea04270d6cb67fe41469aceceb252b60ba20cb6100d9ab71f" id=3e0e0372-5749-40ac-9bed-969fd6c1743f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.848942905Z" level=info msg="Removing container: a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963" id=3066c7c9-e847-415e-a240-5f153cbfe784 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 30 02:33:30 addons-261813 crio[961]: time="2024-07-30 02:33:30.865154924Z" level=info msg="Removed container a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963: ingress-nginx/ingress-nginx-controller-6d9bd977d4-qtc5f/controller" id=3066c7c9-e847-415e-a240-5f153cbfe784 name=/runtime.v1.RuntimeService/RemoveContainer
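	The CRI-O excerpt above is the tail of the crio unit journal, the same source the log-gathering steps read with journalctl. A sketch of pulling a comparable window directly, assuming shell access through minikube ssh:

	    minikube -p addons-261813 ssh -- sudo journalctl -u crio -n 400 --no-pager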
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8609bed30522f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   b7d893e597415       hello-world-app-6778b5fc9f-sfmbr
	d6401ab1fa013       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   371517bc5e27c       nginx
	79fac74c217dc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago       Running             busybox                   0                   a3bde3f0b15a1       busybox
	07fb52f251a87       296b5f799fcd8a39f0e93373bc18787d846c6a2a78a5657b1514831f043c09bf                                                             5 minutes ago       Exited              patch                     2                   f36e8e0dd726d       ingress-nginx-admission-patch-pdbrb
	018f5cf235431       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              create                    0                   e126db030f8a1       ingress-nginx-admission-create-hwwbc
	78ca04eb9146a       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        6 minutes ago       Running             metrics-server            0                   42b8f4cfa0017       metrics-server-c59844bb4-8rfsg
	a3dea84fe5c9b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago       Running             storage-provisioner       0                   88c39fa123ada       storage-provisioner
	7e56240fee5c6       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             6 minutes ago       Running             coredns                   0                   472ebf9e1b08d       coredns-7db6d8ff4d-l22tb
	cefca930d8e8f       docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a                           6 minutes ago       Running             kindnet-cni               0                   b873491f998b7       kindnet-2j67p
	93685ccfcfb0c       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                             6 minutes ago       Running             kube-proxy                0                   201ce970d35bd       kube-proxy-s88xb
	ff022a285ff31       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             7 minutes ago       Running             etcd                      0                   d09f676f7ec75       etcd-addons-261813
	35b9e367f7359       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                             7 minutes ago       Running             kube-controller-manager   0                   fd25fe34235e3       kube-controller-manager-addons-261813
	a4673af6f12b1       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                             7 minutes ago       Running             kube-apiserver            0                   147d1b763de49       kube-apiserver-addons-261813
	108e2658a310b       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                             7 minutes ago       Running             kube-scheduler            0                   a7e1e98c29e90       kube-scheduler-addons-261813
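	Each row above comes from the CRI container listing; the full container IDs appear earlier in the log, and crictl can fetch a given container's output by ID. A sketch using the coredns container recorded above:

	    # Run on the node (e.g. via minikube ssh); ID taken from the cri.go "found id" lines
	    sudo crictl logs --tail 50 7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617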
	
	
	==> coredns [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617] <==
	[INFO] 10.244.0.9:56813 - 57176 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002513856s
	[INFO] 10.244.0.9:47865 - 31297 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000701604s
	[INFO] 10.244.0.9:47865 - 31047 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00121576s
	[INFO] 10.244.0.9:37242 - 10809 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000091025s
	[INFO] 10.244.0.9:37242 - 17724 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000049607s
	[INFO] 10.244.0.9:37708 - 60325 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056228s
	[INFO] 10.244.0.9:37708 - 60071 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000144735s
	[INFO] 10.244.0.9:34626 - 44421 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042854s
	[INFO] 10.244.0.9:34626 - 52103 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.001161878s
	[INFO] 10.244.0.9:48047 - 7756 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001391124s
	[INFO] 10.244.0.9:48047 - 5962 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001429351s
	[INFO] 10.244.0.9:54690 - 61475 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071867s
	[INFO] 10.244.0.9:54690 - 47397 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000062071s
	[INFO] 10.244.0.20:43392 - 36231 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000196122s
	[INFO] 10.244.0.20:45888 - 17029 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166101s
	[INFO] 10.244.0.20:44772 - 63998 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017442s
	[INFO] 10.244.0.20:41258 - 1936 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000359377s
	[INFO] 10.244.0.20:50691 - 2308 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141691s
	[INFO] 10.244.0.20:56280 - 37669 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063186s
	[INFO] 10.244.0.20:38851 - 52577 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005769299s
	[INFO] 10.244.0.20:45170 - 63000 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.006575682s
	[INFO] 10.244.0.20:50011 - 31842 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000664707s
	[INFO] 10.244.0.20:46737 - 14536 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000943298s
	[INFO] 10.244.0.22:46027 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000201571s
	[INFO] 10.244.0.22:45175 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000188279s
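	The alternating NXDOMAIN/NOERROR pairs above are the normal effect of the pod resolver's search-path expansion: with the default ndots:5, a name like registry.kube-system.svc.cluster.local is first tried with each search suffix appended (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, and the host's us-east-2.compute.internal domain), each answering NXDOMAIN, before the bare name resolves. One way to reproduce such a lookup from inside the cluster, sketched with a hypothetical throwaway pod:

	    kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- \
	      nslookup registry.kube-system.svc.cluster.local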
	
	
	==> describe nodes <==
	Name:               addons-261813
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-261813
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9
	                    minikube.k8s.io/name=addons-261813
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T02_26_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-261813
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 02:26:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-261813
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 02:33:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 02:31:27 +0000   Tue, 30 Jul 2024 02:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 02:31:27 +0000   Tue, 30 Jul 2024 02:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 02:31:27 +0000   Tue, 30 Jul 2024 02:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 02:31:27 +0000   Tue, 30 Jul 2024 02:27:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-261813
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf74892e2c29481d80de2829e05c4450
	  System UUID:                6ca6ef4f-b25d-4926-a207-98f143624187
	  Boot ID:                    f43244bd-8d62-45f7-a4e7-2b350386049a
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  default                     hello-world-app-6778b5fc9f-sfmbr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-7db6d8ff4d-l22tb                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m59s
	  kube-system                 etcd-addons-261813                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m15s
	  kube-system                 kindnet-2j67p                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m
	  kube-system                 kube-apiserver-addons-261813             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-controller-manager-addons-261813    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-proxy-s88xb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-scheduler-addons-261813             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 metrics-server-c59844bb4-8rfsg           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m56s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m22s (x8 over 7m22s)  kubelet          Node addons-261813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m22s (x8 over 7m22s)  kubelet          Node addons-261813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m22s (x8 over 7m22s)  kubelet          Node addons-261813 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m15s (x2 over 7m15s)  kubelet          Node addons-261813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s (x2 over 7m15s)  kubelet          Node addons-261813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m15s (x2 over 7m15s)  kubelet          Node addons-261813 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m2s                   node-controller  Node addons-261813 event: Registered Node addons-261813 in Controller
	  Normal  NodeReady                6m14s                  kubelet          Node addons-261813 status is now: NodeReady
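The node snapshot above is collected by the test harness; the same view can be reproduced against the profile's kubeconfig context with:

  kubectl --context addons-261813 describe node addons-261813

Worth noting from the allocation table: CPU requests already sit at 950m of the 2-core node (47%), which matters when parallel addon tests schedule additional pods.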
	
	
	==> dmesg <==
	[  +0.001068] FS-Cache: O-key=[8] 'a8eec90000000000'
	[  +0.000737] FS-Cache: N-cookie c=00000119 [p=00000110 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=0000000003cd159e
	[  +0.001078] FS-Cache: N-key=[8] 'a8eec90000000000'
	[  +0.002681] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=00000113 [p=00000110 fl=226 nc=0 na=1]
	[  +0.001024] FS-Cache: O-cookie d=0000000075a7dcb2{9p.inode} n=0000000055616a94
	[  +0.001148] FS-Cache: O-key=[8] 'a8eec90000000000'
	[  +0.000710] FS-Cache: N-cookie c=0000011a [p=00000110 fl=2 nc=0 na=1]
	[  +0.000991] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=00000000d881dbab
	[  +0.001096] FS-Cache: N-key=[8] 'a8eec90000000000'
	[  +2.751220] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=00000111 [p=00000110 fl=226 nc=0 na=1]
	[  +0.001122] FS-Cache: O-cookie d=0000000075a7dcb2{9p.inode} n=00000000387198b3
	[  +0.001118] FS-Cache: O-key=[8] 'a7eec90000000000'
	[  +0.000754] FS-Cache: N-cookie c=0000011c [p=00000110 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=000000007f70e66c
	[  +0.001203] FS-Cache: N-key=[8] 'a7eec90000000000'
	[  +0.349440] FS-Cache: Duplicate cookie detected
	[  +0.000730] FS-Cache: O-cookie c=00000116 [p=00000110 fl=226 nc=0 na=1]
	[  +0.000965] FS-Cache: O-cookie d=0000000075a7dcb2{9p.inode} n=00000000e8c4e10f
	[  +0.001078] FS-Cache: O-key=[8] 'afeec90000000000'
	[  +0.000701] FS-Cache: N-cookie c=0000011d [p=00000110 fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=0000000003cd159e
	[  +0.001050] FS-Cache: N-key=[8] 'afeec90000000000'
	
	
	==> etcd [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3] <==
	{"level":"info","ts":"2024-07-30T02:26:14.412255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-07-30T02:26:14.412296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-30T02:26:14.412349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-30T02:26:14.412385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-30T02:26:14.416108Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T02:26:14.417576Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-261813 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-30T02:26:14.417651Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T02:26:14.420144Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T02:26:14.420299Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T02:26:14.417748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T02:26:14.417843Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-30T02:26:14.42041Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-30T02:26:14.42047Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T02:26:14.421802Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-30T02:26:14.423072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-30T02:26:35.85306Z","caller":"traceutil/trace.go:171","msg":"trace[1156709301] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"125.826625ms","start":"2024-07-30T02:26:35.727215Z","end":"2024-07-30T02:26:35.853041Z","steps":["trace[1156709301] 'process raft request'  (duration: 125.752535ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T02:26:35.855043Z","caller":"traceutil/trace.go:171","msg":"trace[645754919] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"130.705312ms","start":"2024-07-30T02:26:35.724321Z","end":"2024-07-30T02:26:35.855026Z","steps":["trace[645754919] 'process raft request'  (duration: 126.789058ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T02:26:35.905235Z","caller":"traceutil/trace.go:171","msg":"trace[1032819210] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"135.885652ms","start":"2024-07-30T02:26:35.769333Z","end":"2024-07-30T02:26:35.905219Z","steps":["trace[1032819210] 'process raft request'  (duration: 135.855114ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T02:26:35.925931Z","caller":"traceutil/trace.go:171","msg":"trace[1385812532] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"149.062436ms","start":"2024-07-30T02:26:35.776849Z","end":"2024-07-30T02:26:35.925911Z","steps":["trace[1385812532] 'process raft request'  (duration: 128.221691ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T02:26:35.926318Z","caller":"traceutil/trace.go:171","msg":"trace[577518410] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"149.388583ms","start":"2024-07-30T02:26:35.77692Z","end":"2024-07-30T02:26:35.926309Z","steps":["trace[577518410] 'process raft request'  (duration: 128.229198ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T02:26:37.845831Z","caller":"traceutil/trace.go:171","msg":"trace[486773526] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"108.227146ms","start":"2024-07-30T02:26:37.737582Z","end":"2024-07-30T02:26:37.845809Z","steps":["trace[486773526] 'process raft request'  (duration: 63.44517ms)","trace[486773526] 'compare'  (duration: 44.046395ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T02:26:37.918144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.773504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-30T02:26:37.923568Z","caller":"traceutil/trace.go:171","msg":"trace[892392990] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:392; }","duration":"133.195332ms","start":"2024-07-30T02:26:37.79035Z","end":"2024-07-30T02:26:37.923545Z","steps":["trace[892392990] 'agreement among raft nodes before linearized reading'  (duration: 127.756995ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T02:26:38.854328Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.139165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-30T02:26:38.854455Z","caller":"traceutil/trace.go:171","msg":"trace[2016433027] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:472; }","duration":"102.270887ms","start":"2024-07-30T02:26:38.75217Z","end":"2024-07-30T02:26:38.854441Z","steps":["trace[2016433027] 'agreement among raft nodes before linearized reading'  (duration: 102.124495ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:33:35 up 1 day, 16 min,  0 users,  load average: 0.44, 1.13, 1.80
	Linux addons-261813 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a] <==
	I0730 02:32:31.406906       1 main.go:299] handling current node
	W0730 02:32:33.221655       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0730 02:32:33.221688       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0730 02:32:38.532664       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0730 02:32:38.532712       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0730 02:32:41.406473       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:32:41.406507       1 main.go:299] handling current node
	W0730 02:32:47.017602       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 02:32:47.017636       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0730 02:32:51.412605       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:32:51.412740       1 main.go:299] handling current node
	I0730 02:33:01.407093       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:33:01.407217       1 main.go:299] handling current node
	I0730 02:33:11.407033       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:33:11.407064       1 main.go:299] handling current node
	W0730 02:33:15.025623       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0730 02:33:15.026077       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0730 02:33:21.406652       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:33:21.406688       1 main.go:299] handling current node
	W0730 02:33:26.282748       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0730 02:33:26.283083       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0730 02:33:28.467673       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 02:33:28.467800       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0730 02:33:31.406872       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:33:31.406911       1 main.go:299] handling current node
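The recurring "forbidden" messages mean the kindnet service account lacks list/watch on pods, namespaces, and networkpolicies at cluster scope; node handling keeps working regardless, as the interleaved "handling current node" lines show. Whether a given verb is actually granted can be checked directly via impersonation:

  kubectl --context addons-261813 auth can-i list pods --all-namespaces \
    --as=system:serviceaccount:kube-system:kindnet
  kubectl --context addons-261813 auth can-i list networkpolicies.networking.k8s.io \
    --as=system:serviceaccount:kube-system:kindnet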
	
	
	==> kube-apiserver [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8] <==
	E0730 02:29:15.581851       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49278: use of closed network connection
	E0730 02:29:15.724157       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49292: use of closed network connection
	I0730 02:29:54.649120       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0730 02:30:23.275102       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0730 02:30:28.970484       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 02:30:28.970637       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 02:30:28.992845       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 02:30:28.996155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 02:30:29.098197       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 02:30:29.099206       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 02:30:29.113118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 02:30:29.113241       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0730 02:30:30.067174       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0730 02:30:30.113879       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0730 02:30:30.137025       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0730 02:30:35.716228       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.24.83"}
	I0730 02:30:59.010875       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0730 02:31:00.082520       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0730 02:31:04.569919       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0730 02:31:04.855393       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.179.112"}
	I0730 02:33:25.324188       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.109.27"}
	E0730 02:33:26.850521       1 watch.go:250] http2: stream closed
	E0730 02:33:27.582398       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0730 02:33:30.277901       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0730 02:33:30.287697       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
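The burst of "invalid bearer token, serviceaccounts \"ingress-nginx\" not found" errors at 02:33:27-30 coincides with the addons disable ingress step at the end of the Ingress test, so the likeliest reading is tokens from just-deleted service accounts still being presented by pods that are mid-teardown. The state at that point can be checked with:

  kubectl --context addons-261813 get ns ingress-nginx
  kubectl --context addons-261813 -n ingress-nginx get serviceaccounts,pods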
	
	
	==> kube-controller-manager [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c] <==
	W0730 02:32:25.206044       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:32:25.206166       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:32:38.440618       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:32:38.440736       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:32:52.561032       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:32:52.561074       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:32:54.378973       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:32:54.379011       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:32:57.812037       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:32:57.812072       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0730 02:33:25.131171       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="42.035105ms"
	I0730 02:33:25.139909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="8.679224ms"
	I0730 02:33:25.140265       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="41.706µs"
	I0730 02:33:25.140879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="44.782µs"
	I0730 02:33:26.896218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="25.50745ms"
	I0730 02:33:26.896373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="38.489µs"
	W0730 02:33:27.466302       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:33:27.466417       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0730 02:33:27.502683       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0730 02:33:27.513547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="8.237µs"
	I0730 02:33:27.524881       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0730 02:33:29.660970       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:33:29.661008       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:33:32.667349       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:33:32.667384       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5] <==
	I0730 02:26:40.272098       1 server_linux.go:69] "Using iptables proxy"
	I0730 02:26:40.637759       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0730 02:26:40.767191       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0730 02:26:40.767307       1 server_linux.go:165] "Using iptables Proxier"
	I0730 02:26:40.783741       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0730 02:26:40.783846       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0730 02:26:40.783893       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 02:26:40.786842       1 server.go:872] "Version info" version="v1.30.3"
	I0730 02:26:40.787638       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 02:26:40.804133       1 config.go:192] "Starting service config controller"
	I0730 02:26:40.811593       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 02:26:40.807974       1 config.go:101] "Starting endpoint slice config controller"
	I0730 02:26:40.814364       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 02:26:40.808927       1 config.go:319] "Starting node config controller"
	I0730 02:26:40.814473       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 02:26:40.922295       1 shared_informer.go:320] Caches are synced for node config
	I0730 02:26:40.922401       1 shared_informer.go:320] Caches are synced for service config
	I0730 02:26:40.922427       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa] <==
	W0730 02:26:18.466595       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 02:26:18.466644       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 02:26:18.466822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 02:26:18.467149       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 02:26:18.466890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 02:26:18.467247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0730 02:26:18.466948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 02:26:18.467313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0730 02:26:18.467018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0730 02:26:18.467378       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0730 02:26:18.467064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 02:26:18.467456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0730 02:26:18.467110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0730 02:26:18.467520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0730 02:26:18.467704       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0730 02:26:18.467884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0730 02:26:18.467805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 02:26:18.468191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 02:26:18.467854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 02:26:18.468293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 02:26:18.468100       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0730 02:26:18.468420       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0730 02:26:19.409612       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0730 02:26:19.409742       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0730 02:26:21.146996       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 30 02:33:25 addons-261813 kubelet[1543]: I0730 02:33:25.117779    1543 memory_manager.go:354] "RemoveStaleState removing state" podUID="12364633-2bd5-41bf-a673-8ffc6fc19012" containerName="gadget"
	Jul 30 02:33:25 addons-261813 kubelet[1543]: I0730 02:33:25.117788    1543 memory_manager.go:354] "RemoveStaleState removing state" podUID="12364633-2bd5-41bf-a673-8ffc6fc19012" containerName="gadget"
	Jul 30 02:33:25 addons-261813 kubelet[1543]: I0730 02:33:25.176424    1543 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdwb4\" (UniqueName: \"kubernetes.io/projected/e7daa879-eea5-4cd2-8c99-895604a40cf6-kube-api-access-mdwb4\") pod \"hello-world-app-6778b5fc9f-sfmbr\" (UID: \"e7daa879-eea5-4cd2-8c99-895604a40cf6\") " pod="default/hello-world-app-6778b5fc9f-sfmbr"
	Jul 30 02:33:26 addons-261813 kubelet[1543]: I0730 02:33:26.383625    1543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mphxj\" (UniqueName: \"kubernetes.io/projected/0cca283f-f80d-4219-a735-ce5eb75135f4-kube-api-access-mphxj\") pod \"0cca283f-f80d-4219-a735-ce5eb75135f4\" (UID: \"0cca283f-f80d-4219-a735-ce5eb75135f4\") "
	Jul 30 02:33:26 addons-261813 kubelet[1543]: I0730 02:33:26.386136    1543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cca283f-f80d-4219-a735-ce5eb75135f4-kube-api-access-mphxj" (OuterVolumeSpecName: "kube-api-access-mphxj") pod "0cca283f-f80d-4219-a735-ce5eb75135f4" (UID: "0cca283f-f80d-4219-a735-ce5eb75135f4"). InnerVolumeSpecName "kube-api-access-mphxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 30 02:33:26 addons-261813 kubelet[1543]: I0730 02:33:26.484936    1543 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mphxj\" (UniqueName: \"kubernetes.io/projected/0cca283f-f80d-4219-a735-ce5eb75135f4-kube-api-access-mphxj\") on node \"addons-261813\" DevicePath \"\""
	Jul 30 02:33:26 addons-261813 kubelet[1543]: I0730 02:33:26.834369    1543 scope.go:117] "RemoveContainer" containerID="de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3"
	Jul 30 02:33:26 addons-261813 kubelet[1543]: I0730 02:33:26.859882    1543 scope.go:117] "RemoveContainer" containerID="de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3"
	Jul 30 02:33:26 addons-261813 kubelet[1543]: E0730 02:33:26.860546    1543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3\": container with ID starting with de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3 not found: ID does not exist" containerID="de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3"
	Jul 30 02:33:26 addons-261813 kubelet[1543]: I0730 02:33:26.860587    1543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3"} err="failed to get container status \"de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3\": rpc error: code = NotFound desc = could not find container \"de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3\": container with ID starting with de4645b268a7b41642b3e248d95ae7deffdfc547204555648e64dc496c52afb3 not found: ID does not exist"
	Jul 30 02:33:26 addons-261813 kubelet[1543]: I0730 02:33:26.870186    1543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-sfmbr" podStartSLOduration=0.802499312 podStartE2EDuration="1.870067819s" podCreationTimestamp="2024-07-30 02:33:25 +0000 UTC" firstStartedPulling="2024-07-30 02:33:25.505241706 +0000 UTC m=+424.999605301" lastFinishedPulling="2024-07-30 02:33:26.572810213 +0000 UTC m=+426.067173808" observedRunningTime="2024-07-30 02:33:26.869589692 +0000 UTC m=+426.363953287" watchObservedRunningTime="2024-07-30 02:33:26.870067819 +0000 UTC m=+426.364431422"
	Jul 30 02:33:28 addons-261813 kubelet[1543]: I0730 02:33:28.687609    1543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0836205e-1fe5-4ffa-bbb0-004f33f29c79" path="/var/lib/kubelet/pods/0836205e-1fe5-4ffa-bbb0-004f33f29c79/volumes"
	Jul 30 02:33:28 addons-261813 kubelet[1543]: I0730 02:33:28.688169    1543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cca283f-f80d-4219-a735-ce5eb75135f4" path="/var/lib/kubelet/pods/0cca283f-f80d-4219-a735-ce5eb75135f4/volumes"
	Jul 30 02:33:28 addons-261813 kubelet[1543]: I0730 02:33:28.688536    1543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ce5c03-599c-47a2-9f4d-932efae7d5d0" path="/var/lib/kubelet/pods/48ce5c03-599c-47a2-9f4d-932efae7d5d0/volumes"
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.815339    1543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c03fe45-826c-4247-8a5a-13cc26a231ad-webhook-cert\") pod \"1c03fe45-826c-4247-8a5a-13cc26a231ad\" (UID: \"1c03fe45-826c-4247-8a5a-13cc26a231ad\") "
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.815398    1543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dzvt\" (UniqueName: \"kubernetes.io/projected/1c03fe45-826c-4247-8a5a-13cc26a231ad-kube-api-access-6dzvt\") pod \"1c03fe45-826c-4247-8a5a-13cc26a231ad\" (UID: \"1c03fe45-826c-4247-8a5a-13cc26a231ad\") "
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.817469    1543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c03fe45-826c-4247-8a5a-13cc26a231ad-kube-api-access-6dzvt" (OuterVolumeSpecName: "kube-api-access-6dzvt") pod "1c03fe45-826c-4247-8a5a-13cc26a231ad" (UID: "1c03fe45-826c-4247-8a5a-13cc26a231ad"). InnerVolumeSpecName "kube-api-access-6dzvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.822760    1543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c03fe45-826c-4247-8a5a-13cc26a231ad-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1c03fe45-826c-4247-8a5a-13cc26a231ad" (UID: "1c03fe45-826c-4247-8a5a-13cc26a231ad"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.846855    1543 scope.go:117] "RemoveContainer" containerID="a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963"
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.865498    1543 scope.go:117] "RemoveContainer" containerID="a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963"
	Jul 30 02:33:30 addons-261813 kubelet[1543]: E0730 02:33:30.865896    1543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963\": container with ID starting with a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963 not found: ID does not exist" containerID="a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963"
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.865933    1543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963"} err="failed to get container status \"a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963\": rpc error: code = NotFound desc = could not find container \"a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963\": container with ID starting with a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963 not found: ID does not exist"
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.916603    1543 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c03fe45-826c-4247-8a5a-13cc26a231ad-webhook-cert\") on node \"addons-261813\" DevicePath \"\""
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.916645    1543 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6dzvt\" (UniqueName: \"kubernetes.io/projected/1c03fe45-826c-4247-8a5a-13cc26a231ad-kube-api-access-6dzvt\") on node \"addons-261813\" DevicePath \"\""
	Jul 30 02:33:32 addons-261813 kubelet[1543]: I0730 02:33:32.687607    1543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c03fe45-826c-4247-8a5a-13cc26a231ad" path="/var/lib/kubelet/pods/1c03fe45-826c-4247-8a5a-13cc26a231ad/volumes"
	
	
	==> storage-provisioner [a3dea84fe5c9b07b64b400072c6d1439ccfdd9c582dff8f4770a92b706c7dcf4] <==
	I0730 02:27:22.487385       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0730 02:27:22.505591       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0730 02:27:22.505727       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0730 02:27:22.531948       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0730 02:27:22.532491       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa5818b7-4691-4ab8-8ae6-79fc45b09a77", APIVersion:"v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-261813_8cc6457e-2fa4-461d-ae7a-ff1f1e7c4e88 became leader
	I0730 02:27:22.532710       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-261813_8cc6457e-2fa4-461d-ae7a-ff1f1e7c4e88!
	I0730 02:27:22.640193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-261813_8cc6457e-2fa4-461d-ae7a-ff1f1e7c4e88!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-261813 -n addons-261813
helpers_test.go:261: (dbg) Run:  kubectl --context addons-261813 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.73s)
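The decisive step is the in-node curl: ssh surfaced exit status 28, curl's own timeout code, meaning no response arrived from 127.0.0.1:80 within the 2m11s window even though the nginx pod itself went Ready. A tighter repro bounds the timeout and checks the controller side first (same profile and default resource names assumed):

  kubectl --context addons-261813 -n ingress-nginx get pods,svc -o wide
  kubectl --context addons-261813 get ingress -o wide
  out/minikube-linux-arm64 -p addons-261813 ssh "curl -sv -m 15 http://127.0.0.1/ -H 'Host: nginx.example.com'"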

TestAddons/parallel/MetricsServer (310.4s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 6.297484ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-8rfsg" [fac509e3-535c-40c1-ad6c-61226795aa5e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004261004s
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (96.625949ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 4m19.794671345s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (201.670301ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 4m24.256076877s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (111.935949ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 4m30.76256386s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (87.786571ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 4m37.085233816s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (86.126407ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 4m48.221222884s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (86.068398ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 4m59.75715282s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (89.524787ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 5m32.720457451s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (84.468685ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 5m51.84846112s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (90.540351ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 6m21.033306254s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (86.816379ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 7m42.953926298s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (88.260332ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 8m27.642267543s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-261813 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-261813 top pods -n kube-system: exit status 1 (84.047576ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-l22tb, age: 9m21.785393486s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 addons disable metrics-server --alsologtostderr -v=1
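Every kubectl top attempt over roughly five minutes fails the same way while the reported pod age climbs from 4m19s to 9m21s, so the metrics pipeline as a whole never became available rather than individual pods being too fresh. Two quick checks, assuming the standard APIService name that metrics-server registers:

  kubectl --context addons-261813 get apiservice v1beta1.metrics.k8s.io
  kubectl --context addons-261813 -n kube-system logs deploy/metrics-server --tail=20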
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-261813
helpers_test.go:235: (dbg) docker inspect addons-261813:

-- stdout --
	[
	    {
	        "Id": "8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90",
	        "Created": "2024-07-30T02:25:57.238468722Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1599468,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-30T02:25:57.370462158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90/hostname",
	        "HostsPath": "/var/lib/docker/containers/8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90/hosts",
	        "LogPath": "/var/lib/docker/containers/8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90/8224a32cc06fdbd60aa04ed905ff45e4a5aefd3ac14f8d7f1f0b6b32322ccd90-json.log",
	        "Name": "/addons-261813",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-261813:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-261813",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5d4252b6024d7c7732271e66ddfc0c8aa6cc619be7a19c78bf8584a175a0612d-init/diff:/var/lib/docker/overlay2/acd0679734de498ee4da989a39c292c935753fd7c8a4808d283ba27465852ac6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d4252b6024d7c7732271e66ddfc0c8aa6cc619be7a19c78bf8584a175a0612d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d4252b6024d7c7732271e66ddfc0c8aa6cc619be7a19c78bf8584a175a0612d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d4252b6024d7c7732271e66ddfc0c8aa6cc619be7a19c78bf8584a175a0612d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-261813",
	                "Source": "/var/lib/docker/volumes/addons-261813/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-261813",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-261813",
	                "name.minikube.sigs.k8s.io": "addons-261813",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a5a1267ebe74a11cfab5819a02bafb2c19ce9fedfd6414269bd28c5cfcff0f5",
	            "SandboxKey": "/var/run/docker/netns/2a5a1267ebe7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38883"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38884"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38887"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38885"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38886"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-261813": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0521bbec070a01539d5e070e1b1fa985de506ae39d48ba4860e79902f78cfc2d",
	                    "EndpointID": "d5cb5c39432fd035122d1b99c8b651245dd2b8032ba2f6eda261cf7f02b8ea6d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-261813",
	                        "8224a32cc06f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
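Rather than parsing the full JSON dump above, the harness reads single fields from `docker inspect` with Go templates; the `22/tcp` template below appears verbatim in the provisioning log further down, where it resolves to host port 38883 for this container. A minimal standalone sketch, assuming only that docker is on PATH and the addons-261813 container exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Go template that indexes into .NetworkSettings.Ports to read the
	// host port bound to the container's SSH port (22/tcp).
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "addons-261813").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 38883 above
}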
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-261813 -n addons-261813
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-261813 logs -n 25: (1.686484212s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-154092 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | download-docker-154092                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-154092                                                                   | download-docker-154092 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-628859   | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | binary-mirror-628859                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42821                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-628859                                                                     | binary-mirror-628859   | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| addons  | disable dashboard -p                                                                        | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | addons-261813                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | addons-261813                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-261813 --wait=true                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:29 UTC | 30 Jul 24 02:29 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-261813 ip                                                                            | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:29 UTC | 30 Jul 24 02:29 UTC |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:29 UTC | 30 Jul 24 02:29 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:29 UTC | 30 Jul 24 02:29 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:29 UTC | 30 Jul 24 02:29 UTC |
	|         | -p addons-261813                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-261813 ssh cat                                                                       | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | /opt/local-path-provisioner/pvc-c44108cc-c5e9-43dd-8069-916608c7b030_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-261813 addons                                                                        | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-261813 addons                                                                        | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | addons-261813                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | -p addons-261813                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:30 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:30 UTC | 30 Jul 24 02:31 UTC |
	|         | addons-261813                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-261813 ssh curl -s                                                                   | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-261813 ip                                                                            | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:33 UTC | 30 Jul 24 02:33 UTC |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:33 UTC | 30 Jul 24 02:33 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-261813 addons disable                                                                | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:33 UTC | 30 Jul 24 02:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-261813 addons                                                                        | addons-261813          | jenkins | v1.33.1 | 30 Jul 24 02:35 UTC | 30 Jul 24 02:35 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 02:25:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 02:25:33.267887 1598980 out.go:291] Setting OutFile to fd 1 ...
	I0730 02:25:33.268101 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:25:33.268114 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:25:33.268120 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:25:33.268379 1598980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 02:25:33.268849 1598980 out.go:298] Setting JSON to false
	I0730 02:25:33.269803 1598980 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":86879,"bootTime":1722219454,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0730 02:25:33.269873 1598980 start.go:139] virtualization:  
	I0730 02:25:33.272311 1598980 out.go:177] * [addons-261813] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0730 02:25:33.273952 1598980 out.go:177]   - MINIKUBE_LOCATION=19348
	I0730 02:25:33.274118 1598980 notify.go:220] Checking for updates...
	I0730 02:25:33.277820 1598980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 02:25:33.279485 1598980 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:25:33.281156 1598980 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	I0730 02:25:33.282793 1598980 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0730 02:25:33.284365 1598980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 02:25:33.286188 1598980 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 02:25:33.306398 1598980 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0730 02:25:33.306517 1598980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:25:33.378365 1598980 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-30 02:25:33.369398448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:25:33.378509 1598980 docker.go:307] overlay module found
	I0730 02:25:33.380598 1598980 out.go:177] * Using the docker driver based on user configuration
	I0730 02:25:33.382552 1598980 start.go:297] selected driver: docker
	I0730 02:25:33.382567 1598980 start.go:901] validating driver "docker" against <nil>
	I0730 02:25:33.382582 1598980 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 02:25:33.383222 1598980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:25:33.436494 1598980 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-30 02:25:33.427878111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:25:33.436667 1598980 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 02:25:33.436907 1598980 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 02:25:33.438724 1598980 out.go:177] * Using Docker driver with root privileges
	I0730 02:25:33.440325 1598980 cni.go:84] Creating CNI manager for ""
	I0730 02:25:33.440343 1598980 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0730 02:25:33.440354 1598980 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0730 02:25:33.440434 1598980 start.go:340] cluster config:
	{Name:addons-261813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-261813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:25:33.442385 1598980 out.go:177] * Starting "addons-261813" primary control-plane node in "addons-261813" cluster
	I0730 02:25:33.444063 1598980 cache.go:121] Beginning downloading kic base image for docker with crio
	I0730 02:25:33.445819 1598980 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0730 02:25:33.447460 1598980 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:25:33.447468 1598980 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0730 02:25:33.447512 1598980 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0730 02:25:33.447536 1598980 cache.go:56] Caching tarball of preloaded images
	I0730 02:25:33.447614 1598980 preload.go:172] Found /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0730 02:25:33.447624 1598980 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 02:25:33.448109 1598980 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/config.json ...
	I0730 02:25:33.448137 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/config.json: {Name:mk7399907658f87ccf6a0807cd3f6657d864c095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:25:33.461877 1598980 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0730 02:25:33.461997 1598980 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0730 02:25:33.462035 1598980 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0730 02:25:33.462045 1598980 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0730 02:25:33.462054 1598980 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0730 02:25:33.462060 1598980 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0730 02:25:50.400606 1598980 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0730 02:25:50.400661 1598980 cache.go:194] Successfully downloaded all kic artifacts
	I0730 02:25:50.400704 1598980 start.go:360] acquireMachinesLock for addons-261813: {Name:mk6ed76ff4a7e22da2e04cc04fb41fd5cadc013c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 02:25:50.401451 1598980 start.go:364] duration metric: took 716.899µs to acquireMachinesLock for "addons-261813"
	I0730 02:25:50.401494 1598980 start.go:93] Provisioning new machine with config: &{Name:addons-261813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-261813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 02:25:50.401585 1598980 start.go:125] createHost starting for "" (driver="docker")
	I0730 02:25:50.403873 1598980 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0730 02:25:50.404184 1598980 start.go:159] libmachine.API.Create for "addons-261813" (driver="docker")
	I0730 02:25:50.404226 1598980 client.go:168] LocalClient.Create starting
	I0730 02:25:50.404345 1598980 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem
	I0730 02:25:50.591006 1598980 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem
	I0730 02:25:50.816896 1598980 cli_runner.go:164] Run: docker network inspect addons-261813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0730 02:25:50.834595 1598980 cli_runner.go:211] docker network inspect addons-261813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0730 02:25:50.834702 1598980 network_create.go:284] running [docker network inspect addons-261813] to gather additional debugging logs...
	I0730 02:25:50.834724 1598980 cli_runner.go:164] Run: docker network inspect addons-261813
	W0730 02:25:50.850774 1598980 cli_runner.go:211] docker network inspect addons-261813 returned with exit code 1
	I0730 02:25:50.850809 1598980 network_create.go:287] error running [docker network inspect addons-261813]: docker network inspect addons-261813: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-261813 not found
	I0730 02:25:50.850822 1598980 network_create.go:289] output of [docker network inspect addons-261813]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-261813 not found
	
	** /stderr **
	I0730 02:25:50.850930 1598980 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0730 02:25:50.866360 1598980 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ec930}
	I0730 02:25:50.866402 1598980 network_create.go:124] attempt to create docker network addons-261813 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0730 02:25:50.866466 1598980 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-261813 addons-261813
	I0730 02:25:50.933600 1598980 network_create.go:108] docker network addons-261813 192.168.49.0/24 created
	I0730 02:25:50.933633 1598980 kic.go:121] calculated static IP "192.168.49.2" for the "addons-261813" container
	I0730 02:25:50.933710 1598980 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0730 02:25:50.949234 1598980 cli_runner.go:164] Run: docker volume create addons-261813 --label name.minikube.sigs.k8s.io=addons-261813 --label created_by.minikube.sigs.k8s.io=true
	I0730 02:25:50.965769 1598980 oci.go:103] Successfully created a docker volume addons-261813
	I0730 02:25:50.965863 1598980 cli_runner.go:164] Run: docker run --rm --name addons-261813-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-261813 --entrypoint /usr/bin/test -v addons-261813:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0730 02:25:52.955911 1598980 cli_runner.go:217] Completed: docker run --rm --name addons-261813-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-261813 --entrypoint /usr/bin/test -v addons-261813:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib: (1.989986661s)
	I0730 02:25:52.955945 1598980 oci.go:107] Successfully prepared a docker volume addons-261813
	I0730 02:25:52.956017 1598980 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:25:52.956042 1598980 kic.go:194] Starting extracting preloaded images to volume ...
	I0730 02:25:52.956139 1598980 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-261813:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0730 02:25:57.172109 1598980 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-261813:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (4.215930181s)
	I0730 02:25:57.172143 1598980 kic.go:203] duration metric: took 4.216095273s to extract preloaded images to volume ...
	W0730 02:25:57.172290 1598980 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0730 02:25:57.172400 1598980 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0730 02:25:57.223692 1598980 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-261813 --name addons-261813 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-261813 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-261813 --network addons-261813 --ip 192.168.49.2 --volume addons-261813:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0730 02:25:57.561044 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Running}}
	I0730 02:25:57.585912 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:25:57.620922 1598980 cli_runner.go:164] Run: docker exec addons-261813 stat /var/lib/dpkg/alternatives/iptables
	I0730 02:25:57.686302 1598980 oci.go:144] the created container "addons-261813" has a running status.
	I0730 02:25:57.686330 1598980 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa...
	I0730 02:25:58.641209 1598980 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0730 02:25:58.663951 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:25:58.691203 1598980 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0730 02:25:58.691224 1598980 kic_runner.go:114] Args: [docker exec --privileged addons-261813 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0730 02:25:58.739189 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:25:58.757904 1598980 machine.go:94] provisionDockerMachine start ...
	I0730 02:25:58.758014 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:25:58.775832 1598980 main.go:141] libmachine: Using SSH client type: native
	I0730 02:25:58.776163 1598980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38883 <nil> <nil>}
	I0730 02:25:58.776179 1598980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 02:25:58.907366 1598980 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-261813
	
	I0730 02:25:58.907410 1598980 ubuntu.go:169] provisioning hostname "addons-261813"
	I0730 02:25:58.907481 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:25:58.924976 1598980 main.go:141] libmachine: Using SSH client type: native
	I0730 02:25:58.925229 1598980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38883 <nil> <nil>}
	I0730 02:25:58.925247 1598980 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-261813 && echo "addons-261813" | sudo tee /etc/hostname
	I0730 02:25:59.076295 1598980 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-261813
	
	I0730 02:25:59.076417 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:25:59.093810 1598980 main.go:141] libmachine: Using SSH client type: native
	I0730 02:25:59.094057 1598980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38883 <nil> <nil>}
	I0730 02:25:59.094079 1598980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-261813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-261813/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-261813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 02:25:59.228058 1598980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 02:25:59.228093 1598980 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19348-1592571/.minikube CaCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19348-1592571/.minikube}
	I0730 02:25:59.228121 1598980 ubuntu.go:177] setting up certificates
	I0730 02:25:59.228131 1598980 provision.go:84] configureAuth start
	I0730 02:25:59.228208 1598980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-261813
	I0730 02:25:59.250015 1598980 provision.go:143] copyHostCerts
	I0730 02:25:59.250102 1598980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem (1078 bytes)
	I0730 02:25:59.250245 1598980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem (1123 bytes)
	I0730 02:25:59.250308 1598980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem (1675 bytes)
	I0730 02:25:59.250357 1598980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem org=jenkins.addons-261813 san=[127.0.0.1 192.168.49.2 addons-261813 localhost minikube]
	I0730 02:26:00.339017 1598980 provision.go:177] copyRemoteCerts
	I0730 02:26:00.339100 1598980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 02:26:00.339148 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:00.360181 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:00.458933 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 02:26:00.484366 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0730 02:26:00.509092 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0730 02:26:00.533025 1598980 provision.go:87] duration metric: took 1.304874627s to configureAuth
	I0730 02:26:00.533108 1598980 ubuntu.go:193] setting minikube options for container-runtime
	I0730 02:26:00.533337 1598980 config.go:182] Loaded profile config "addons-261813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:26:00.533458 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:00.552949 1598980 main.go:141] libmachine: Using SSH client type: native
	I0730 02:26:00.553213 1598980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38883 <nil> <nil>}
	I0730 02:26:00.553236 1598980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 02:26:00.795238 1598980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 02:26:00.795263 1598980 machine.go:97] duration metric: took 2.03733692s to provisionDockerMachine
	I0730 02:26:00.795275 1598980 client.go:171] duration metric: took 10.391033603s to LocalClient.Create
	I0730 02:26:00.795293 1598980 start.go:167] duration metric: took 10.391111197s to libmachine.API.Create "addons-261813"
	I0730 02:26:00.795300 1598980 start.go:293] postStartSetup for "addons-261813" (driver="docker")
	I0730 02:26:00.795311 1598980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 02:26:00.795376 1598980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 02:26:00.795439 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:00.813108 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:00.909155 1598980 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 02:26:00.912272 1598980 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0730 02:26:00.912359 1598980 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0730 02:26:00.912388 1598980 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0730 02:26:00.912410 1598980 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0730 02:26:00.912456 1598980 filesync.go:126] Scanning /home/jenkins/minikube-integration/19348-1592571/.minikube/addons for local assets ...
	I0730 02:26:00.912562 1598980 filesync.go:126] Scanning /home/jenkins/minikube-integration/19348-1592571/.minikube/files for local assets ...
	I0730 02:26:00.912622 1598980 start.go:296] duration metric: took 117.31435ms for postStartSetup
	I0730 02:26:00.913016 1598980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-261813
	I0730 02:26:00.930980 1598980 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/config.json ...
	I0730 02:26:00.931289 1598980 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 02:26:00.931349 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:00.950806 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:01.045455 1598980 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0730 02:26:01.050183 1598980 start.go:128] duration metric: took 10.648581174s to createHost
	I0730 02:26:01.050209 1598980 start.go:83] releasing machines lock for "addons-261813", held for 10.648735469s
	I0730 02:26:01.050284 1598980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-261813
	I0730 02:26:01.066929 1598980 ssh_runner.go:195] Run: cat /version.json
	I0730 02:26:01.066994 1598980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 02:26:01.067089 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:01.066997 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:01.093783 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:01.101572 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:01.368278 1598980 ssh_runner.go:195] Run: systemctl --version
	I0730 02:26:01.372848 1598980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 02:26:01.517465 1598980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0730 02:26:01.521974 1598980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 02:26:01.545656 1598980 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0730 02:26:01.545762 1598980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 02:26:01.581879 1598980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
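
The two find commands above neutralize any pre-existing loopback and bridge/podman CNI definitions by renaming them in place rather than deleting them, so the originals stay recoverable. For the two files named in the log line above, the effect is equivalent to:

	sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
	sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled
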
	I0730 02:26:01.581916 1598980 start.go:495] detecting cgroup driver to use...
	I0730 02:26:01.581952 1598980 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0730 02:26:01.582019 1598980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 02:26:01.599097 1598980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 02:26:01.612002 1598980 docker.go:217] disabling cri-docker service (if available) ...
	I0730 02:26:01.612076 1598980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 02:26:01.628294 1598980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 02:26:01.644915 1598980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 02:26:01.741736 1598980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 02:26:01.840061 1598980 docker.go:233] disabling docker service ...
	I0730 02:26:01.840139 1598980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 02:26:01.860581 1598980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 02:26:01.874213 1598980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 02:26:01.970821 1598980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 02:26:02.078050 1598980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 02:26:02.090982 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 02:26:02.107904 1598980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 02:26:02.107992 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.119230 1598980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 02:26:02.119317 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.129958 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.139748 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.150179 1598980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 02:26:02.159605 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.169630 1598980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:26:02.185874 1598980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
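
Taken together, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following keys. This is a reconstruction from the commands, not a capture of the actual file, and the placement within the file's TOML tables is assumed:

	$ sudo cat /etc/crio/crio.conf.d/02-crio.conf   # expected result of the edits above
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
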
	I0730 02:26:02.196187 1598980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 02:26:02.205121 1598980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 02:26:02.213536 1598980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:26:02.302719 1598980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 02:26:02.426250 1598980 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 02:26:02.426334 1598980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 02:26:02.430009 1598980 start.go:563] Will wait 60s for crictl version
	I0730 02:26:02.430072 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:26:02.433710 1598980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 02:26:02.473009 1598980 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0730 02:26:02.473118 1598980 ssh_runner.go:195] Run: crio --version
	I0730 02:26:02.509905 1598980 ssh_runner.go:195] Run: crio --version
	I0730 02:26:02.552156 1598980 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0730 02:26:02.553855 1598980 cli_runner.go:164] Run: docker network inspect addons-261813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0730 02:26:02.571690 1598980 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0730 02:26:02.575117 1598980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 02:26:02.585641 1598980 kubeadm.go:883] updating cluster {Name:addons-261813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-261813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 02:26:02.585788 1598980 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:26:02.585855 1598980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 02:26:02.658019 1598980 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 02:26:02.658043 1598980 crio.go:433] Images already preloaded, skipping extraction
	I0730 02:26:02.658103 1598980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 02:26:02.695842 1598980 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 02:26:02.695865 1598980 cache_images.go:84] Images are preloaded, skipping loading
	I0730 02:26:02.695874 1598980 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0730 02:26:02.695991 1598980 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-261813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-261813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 02:26:02.696109 1598980 ssh_runner.go:195] Run: crio config
	I0730 02:26:02.761096 1598980 cni.go:84] Creating CNI manager for ""
	I0730 02:26:02.761116 1598980 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0730 02:26:02.761125 1598980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 02:26:02.761154 1598980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-261813 NodeName:addons-261813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 02:26:02.761354 1598980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-261813"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
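
minikube feeds this rendered config directly to kubeadm init further down. For anyone reproducing the setup by hand, a config like this can be sanity-checked offline first; kubeadm v1.26+ ships a validator (illustrative, not a step this test actually runs):

	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml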
	
	I0730 02:26:02.761448 1598980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 02:26:02.771112 1598980 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 02:26:02.771207 1598980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0730 02:26:02.781369 1598980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0730 02:26:02.799173 1598980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 02:26:02.816985 1598980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0730 02:26:02.834750 1598980 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0730 02:26:02.838241 1598980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
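
The one-liner above is minikube's idempotent pattern for pinning a hosts entry: strip any stale control-plane.minikube.internal line, append the current IP, and copy the staged file back as root. Expanded with comments (the same commands, shown for readability):

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;   # drop any stale entry
	  echo "192.168.49.2	control-plane.minikube.internal";       # append the current mapping
	} > /tmp/h.$$                                                 # stage under a per-PID temp name
	sudo cp /tmp/h.$$ /etc/hosts                                  # install the result as root
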
	I0730 02:26:02.848846 1598980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:26:02.930283 1598980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 02:26:02.943888 1598980 certs.go:68] Setting up /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813 for IP: 192.168.49.2
	I0730 02:26:02.943911 1598980 certs.go:194] generating shared ca certs ...
	I0730 02:26:02.943928 1598980 certs.go:226] acquiring lock for ca certs: {Name:mkd188f515cf1f581cef2c6a3cc946da59d73d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:02.944645 1598980 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key
	I0730 02:26:03.109734 1598980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt ...
	I0730 02:26:03.109768 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt: {Name:mkd023154b3e5573ffc40cf3fc0f85147ef040f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.110888 1598980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key ...
	I0730 02:26:03.110906 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key: {Name:mk5d98b96345e33944ba25ec706238280b86654e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.111717 1598980 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key
	I0730 02:26:03.221829 1598980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt ...
	I0730 02:26:03.221864 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt: {Name:mk1d57f51e38294f831619c296cd8e2d620e0692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.222066 1598980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key ...
	I0730 02:26:03.222079 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key: {Name:mkf385c7237e9a43047376f3104a113068da5114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.222645 1598980 certs.go:256] generating profile certs ...
	I0730 02:26:03.222711 1598980 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.key
	I0730 02:26:03.222730 1598980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt with IP's: []
	I0730 02:26:03.644519 1598980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt ...
	I0730 02:26:03.644556 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: {Name:mkf3448e81372a1b80d94e46e809da85032a15b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.645327 1598980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.key ...
	I0730 02:26:03.645346 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.key: {Name:mk7c8cf1b9e9a92c64baad47c58ea1361cea285d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:03.645450 1598980 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key.76b43c53
	I0730 02:26:03.645472 1598980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt.76b43c53 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0730 02:26:04.098132 1598980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt.76b43c53 ...
	I0730 02:26:04.098169 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt.76b43c53: {Name:mk88d85e52611170516cc4f640305dea7464276f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:04.098379 1598980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key.76b43c53 ...
	I0730 02:26:04.098399 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key.76b43c53: {Name:mk2e7ad47416e67b41dbc11e0500d74bc0af2676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:04.098523 1598980 certs.go:381] copying /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt.76b43c53 -> /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt
	I0730 02:26:04.098614 1598980 certs.go:385] copying /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key.76b43c53 -> /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key
	I0730 02:26:04.098723 1598980 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.key
	I0730 02:26:04.098745 1598980 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.crt with IP's: []
	I0730 02:26:04.618955 1598980 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.crt ...
	I0730 02:26:04.618989 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.crt: {Name:mk424ecb4f85199cb0be767926743e984c77f8eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:04.619273 1598980 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.key ...
	I0730 02:26:04.619291 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.key: {Name:mke95e26c260749605364ba06ec1e8050f03ffbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:04.620254 1598980 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 02:26:04.620315 1598980 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem (1078 bytes)
	I0730 02:26:04.620344 1598980 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem (1123 bytes)
	I0730 02:26:04.620373 1598980 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem (1675 bytes)
	I0730 02:26:04.621059 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 02:26:04.646978 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0730 02:26:04.670894 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 02:26:04.695105 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0730 02:26:04.719505 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0730 02:26:04.744249 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0730 02:26:04.768188 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 02:26:04.793219 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0730 02:26:04.817137 1598980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 02:26:04.841521 1598980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 02:26:04.859576 1598980 ssh_runner.go:195] Run: openssl version
	I0730 02:26:04.865123 1598980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 02:26:04.874527 1598980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:26:04.878096 1598980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:26:04.878202 1598980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:26:04.885142 1598980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
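
The b5213941.0 name is OpenSSL's subject-hash lookup form: openssl x509 -hash prints an 8-hex-digit hash of the certificate subject, and OpenSSL resolves CAs in /etc/ssl/certs via <hash>.0 symlinks. The two steps above amount to:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # hash-named link OpenSSL can find
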
	I0730 02:26:04.894756 1598980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 02:26:04.898304 1598980 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 02:26:04.898355 1598980 kubeadm.go:392] StartCluster: {Name:addons-261813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-261813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:26:04.898434 1598980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 02:26:04.898656 1598980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 02:26:04.939207 1598980 cri.go:89] found id: ""
	I0730 02:26:04.939291 1598980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0730 02:26:04.947992 1598980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0730 02:26:04.956588 1598980 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0730 02:26:04.956678 1598980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0730 02:26:04.967162 1598980 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0730 02:26:04.967190 1598980 kubeadm.go:157] found existing configuration files:
	
	I0730 02:26:04.967244 1598980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0730 02:26:04.975754 1598980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0730 02:26:04.975865 1598980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0730 02:26:04.984299 1598980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0730 02:26:04.992671 1598980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0730 02:26:04.992760 1598980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0730 02:26:05.001992 1598980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0730 02:26:05.013100 1598980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0730 02:26:05.013176 1598980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0730 02:26:05.023217 1598980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0730 02:26:05.032884 1598980 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0730 02:26:05.032956 1598980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0730 02:26:05.041916 1598980 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0730 02:26:05.088384 1598980 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0730 02:26:05.088792 1598980 kubeadm.go:310] [preflight] Running pre-flight checks
	I0730 02:26:05.128720 1598980 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0730 02:26:05.128859 1598980 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1065-aws
	I0730 02:26:05.128913 1598980 kubeadm.go:310] OS: Linux
	I0730 02:26:05.128986 1598980 kubeadm.go:310] CGROUPS_CPU: enabled
	I0730 02:26:05.129068 1598980 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0730 02:26:05.129140 1598980 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0730 02:26:05.129217 1598980 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0730 02:26:05.129292 1598980 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0730 02:26:05.129416 1598980 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0730 02:26:05.129495 1598980 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0730 02:26:05.129576 1598980 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0730 02:26:05.129653 1598980 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0730 02:26:05.196323 1598980 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0730 02:26:05.196587 1598980 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0730 02:26:05.196722 1598980 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0730 02:26:05.448528 1598980 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0730 02:26:05.451749 1598980 out.go:204]   - Generating certificates and keys ...
	I0730 02:26:05.451944 1598980 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0730 02:26:05.452084 1598980 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0730 02:26:05.881971 1598980 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0730 02:26:06.395712 1598980 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0730 02:26:06.652234 1598980 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0730 02:26:07.301158 1598980 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0730 02:26:08.666808 1598980 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0730 02:26:08.667128 1598980 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-261813 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0730 02:26:09.621643 1598980 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0730 02:26:09.621931 1598980 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-261813 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0730 02:26:09.968295 1598980 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0730 02:26:10.630071 1598980 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0730 02:26:10.826805 1598980 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0730 02:26:10.827056 1598980 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0730 02:26:11.370118 1598980 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0730 02:26:11.894074 1598980 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0730 02:26:12.119229 1598980 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0730 02:26:12.370498 1598980 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0730 02:26:12.798192 1598980 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0730 02:26:12.798772 1598980 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0730 02:26:12.801768 1598980 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0730 02:26:12.804051 1598980 out.go:204]   - Booting up control plane ...
	I0730 02:26:12.804170 1598980 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0730 02:26:12.804254 1598980 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0730 02:26:12.805066 1598980 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0730 02:26:12.830409 1598980 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0730 02:26:12.832317 1598980 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0730 02:26:12.832390 1598980 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0730 02:26:12.924871 1598980 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0730 02:26:12.924959 1598980 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0730 02:26:13.925581 1598980 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000801637s
	I0730 02:26:13.925691 1598980 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0730 02:26:19.927861 1598980 kubeadm.go:310] [api-check] The API server is healthy after 6.002270246s
	I0730 02:26:19.946595 1598980 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0730 02:26:19.960246 1598980 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0730 02:26:19.983539 1598980 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0730 02:26:19.983736 1598980 kubeadm.go:310] [mark-control-plane] Marking the node addons-261813 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0730 02:26:19.995037 1598980 kubeadm.go:310] [bootstrap-token] Using token: zfd3n7.au06qsgfsxnz77lt
	I0730 02:26:19.997119 1598980 out.go:204]   - Configuring RBAC rules ...
	I0730 02:26:19.997282 1598980 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0730 02:26:20.014823 1598980 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0730 02:26:20.026266 1598980 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0730 02:26:20.032700 1598980 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0730 02:26:20.038942 1598980 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0730 02:26:20.043190 1598980 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0730 02:26:20.334919 1598980 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0730 02:26:20.763790 1598980 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0730 02:26:21.334325 1598980 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0730 02:26:21.335683 1598980 kubeadm.go:310] 
	I0730 02:26:21.335773 1598980 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0730 02:26:21.335785 1598980 kubeadm.go:310] 
	I0730 02:26:21.335887 1598980 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0730 02:26:21.335896 1598980 kubeadm.go:310] 
	I0730 02:26:21.335927 1598980 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0730 02:26:21.336025 1598980 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0730 02:26:21.336101 1598980 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0730 02:26:21.336115 1598980 kubeadm.go:310] 
	I0730 02:26:21.336176 1598980 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0730 02:26:21.336182 1598980 kubeadm.go:310] 
	I0730 02:26:21.336255 1598980 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0730 02:26:21.336283 1598980 kubeadm.go:310] 
	I0730 02:26:21.336345 1598980 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0730 02:26:21.336453 1598980 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0730 02:26:21.336537 1598980 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0730 02:26:21.336565 1598980 kubeadm.go:310] 
	I0730 02:26:21.336742 1598980 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0730 02:26:21.336828 1598980 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0730 02:26:21.336840 1598980 kubeadm.go:310] 
	I0730 02:26:21.336928 1598980 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zfd3n7.au06qsgfsxnz77lt \
	I0730 02:26:21.337048 1598980 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57de0a3c7c2240aa1874003464848c868dfbdf86454d09acc1d3772ff2d3bc49 \
	I0730 02:26:21.337097 1598980 kubeadm.go:310] 	--control-plane 
	I0730 02:26:21.337106 1598980 kubeadm.go:310] 
	I0730 02:26:21.337202 1598980 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0730 02:26:21.337217 1598980 kubeadm.go:310] 
	I0730 02:26:21.337319 1598980 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zfd3n7.au06qsgfsxnz77lt \
	I0730 02:26:21.337449 1598980 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57de0a3c7c2240aa1874003464848c868dfbdf86454d09acc1d3772ff2d3bc49 
	I0730 02:26:21.341662 1598980 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1065-aws\n", err: exit status 1
	I0730 02:26:21.341780 1598980 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
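
If a join command like the one printed above is ever lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA using the standard OpenSSL pipeline from the kubeadm docs. Shown here against minikube's certificate directory for illustration; the test itself never runs this:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
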
	I0730 02:26:21.341802 1598980 cni.go:84] Creating CNI manager for ""
	I0730 02:26:21.341810 1598980 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0730 02:26:21.344966 1598980 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0730 02:26:21.346611 1598980 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0730 02:26:21.350859 1598980 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0730 02:26:21.350883 1598980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0730 02:26:21.369409 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0730 02:26:21.633648 1598980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0730 02:26:21.633787 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:21.633876 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-261813 minikube.k8s.io/updated_at=2024_07_30T02_26_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9 minikube.k8s.io/name=addons-261813 minikube.k8s.io/primary=true
	I0730 02:26:21.778549 1598980 ops.go:34] apiserver oom_adj: -16
	I0730 02:26:21.778647 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:22.278809 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:22.778823 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:23.279055 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:23.779028 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:24.279326 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:24.779595 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:25.279518 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:25.778804 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:26.278878 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:26.779228 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:27.278839 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:27.779389 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:28.279347 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:28.779421 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:29.279568 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:29.779284 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:30.279362 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:30.778891 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:31.279272 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:31.779245 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:32.278841 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:32.778762 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:33.279503 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:33.779609 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:34.279502 1598980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0730 02:26:34.364065 1598980 kubeadm.go:1113] duration metric: took 12.730327479s to wait for elevateKubeSystemPrivileges
	I0730 02:26:34.364106 1598980 kubeadm.go:394] duration metric: took 29.465756511s to StartCluster
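
The burst of identical "kubectl get sa default" calls above, one roughly every 500ms, is a readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controller is running, so minikube retries until the get succeeds. A minimal shell sketch of the same idea (minikube does this in Go; the loop below is illustrative):

	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms spacing of the log timestamps
	done
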
	I0730 02:26:34.364124 1598980 settings.go:142] acquiring lock: {Name:mk63e25bcb01770839277a929f9ba49ce5be4445 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:34.364759 1598980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:26:34.365154 1598980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/kubeconfig: {Name:mk572b463a11a946de92ccc491c42330cd76de64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:26:34.365353 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0730 02:26:34.365381 1598980 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 02:26:34.365620 1598980 config.go:182] Loaded profile config "addons-261813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:26:34.365650 1598980 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0730 02:26:34.365732 1598980 addons.go:69] Setting yakd=true in profile "addons-261813"
	I0730 02:26:34.365761 1598980 addons.go:234] Setting addon yakd=true in "addons-261813"
	I0730 02:26:34.365772 1598980 addons.go:69] Setting inspektor-gadget=true in profile "addons-261813"
	I0730 02:26:34.365787 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.365795 1598980 addons.go:234] Setting addon inspektor-gadget=true in "addons-261813"
	I0730 02:26:34.365833 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.366207 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.366323 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.366630 1598980 addons.go:69] Setting metrics-server=true in profile "addons-261813"
	I0730 02:26:34.366660 1598980 addons.go:234] Setting addon metrics-server=true in "addons-261813"
	I0730 02:26:34.366685 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.367057 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.369073 1598980 addons.go:69] Setting cloud-spanner=true in profile "addons-261813"
	I0730 02:26:34.369331 1598980 addons.go:234] Setting addon cloud-spanner=true in "addons-261813"
	I0730 02:26:34.369386 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.369558 1598980 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-261813"
	I0730 02:26:34.369580 1598980 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-261813"
	I0730 02:26:34.369598 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.369962 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.370597 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.369222 1598980 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-261813"
	I0730 02:26:34.369232 1598980 addons.go:69] Setting default-storageclass=true in profile "addons-261813"
	I0730 02:26:34.369246 1598980 addons.go:69] Setting gcp-auth=true in profile "addons-261813"
	I0730 02:26:34.369253 1598980 addons.go:69] Setting ingress=true in profile "addons-261813"
	I0730 02:26:34.369259 1598980 addons.go:69] Setting ingress-dns=true in profile "addons-261813"
	I0730 02:26:34.370769 1598980 out.go:177] * Verifying Kubernetes components...
	I0730 02:26:34.371070 1598980 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-261813"
	I0730 02:26:34.371083 1598980 addons.go:69] Setting registry=true in profile "addons-261813"
	I0730 02:26:34.371092 1598980 addons.go:69] Setting storage-provisioner=true in profile "addons-261813"
	I0730 02:26:34.371099 1598980 addons.go:69] Setting volcano=true in profile "addons-261813"
	I0730 02:26:34.371106 1598980 addons.go:69] Setting volumesnapshots=true in profile "addons-261813"
	I0730 02:26:34.373035 1598980 addons.go:234] Setting addon volumesnapshots=true in "addons-261813"
	I0730 02:26:34.373086 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.373517 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.378274 1598980 addons.go:234] Setting addon ingress-dns=true in "addons-261813"
	I0730 02:26:34.378335 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.378751 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.396965 1598980 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-261813"
	I0730 02:26:34.397446 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.406007 1598980 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-261813"
	I0730 02:26:34.406059 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.406483 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.412913 1598980 addons.go:234] Setting addon registry=true in "addons-261813"
	I0730 02:26:34.412973 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.413425 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.426662 1598980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-261813"
	I0730 02:26:34.426993 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.436078 1598980 addons.go:234] Setting addon storage-provisioner=true in "addons-261813"
	I0730 02:26:34.436138 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.436666 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.448219 1598980 addons.go:234] Setting addon volcano=true in "addons-261813"
	I0730 02:26:34.448291 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.456186 1598980 mustload.go:65] Loading cluster: addons-261813
	I0730 02:26:34.456378 1598980 config.go:182] Loaded profile config "addons-261813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:26:34.456647 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.487793 1598980 addons.go:234] Setting addon ingress=true in "addons-261813"
	I0730 02:26:34.487860 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.488388 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.509749 1598980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:26:34.511114 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.556587 1598980 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0730 02:26:34.559212 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0730 02:26:34.559285 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0730 02:26:34.559384 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
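The longer inspect template here resolves the host port mapped onto the node container's SSH port 22; its result feeds the `sshutil` clients created below. Standalone it looks like this (the 38883 matches the Port field in the later sshutil lines):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-261813
    # 38883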
	I0730 02:26:34.559561 1598980 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0730 02:26:34.560640 1598980 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0730 02:26:34.561436 1598980 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 02:26:34.561480 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0730 02:26:34.561576 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.592230 1598980 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0730 02:26:34.592295 1598980 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0730 02:26:34.592394 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.593271 1598980 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0730 02:26:34.596733 1598980 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0730 02:26:34.596790 1598980 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0730 02:26:34.596903 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.614666 1598980 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0730 02:26:34.614806 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0730 02:26:34.614812 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0730 02:26:34.617960 1598980 addons.go:234] Setting addon default-storageclass=true in "addons-261813"
	I0730 02:26:34.622372 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.622834 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.644201 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0730 02:26:34.644271 1598980 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0730 02:26:34.644374 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.655181 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.661970 1598980 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-261813"
	I0730 02:26:34.662019 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:34.672065 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:34.701686 1598980 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0730 02:26:34.704172 1598980 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0730 02:26:34.704194 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0730 02:26:34.704274 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.707706 1598980 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0730 02:26:34.709729 1598980 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0730 02:26:34.709751 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0730 02:26:34.709827 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.712863 1598980 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0730 02:26:34.712919 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0730 02:26:34.713017 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.717124 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
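The bash pipeline above edits the live coredns ConfigMap with sed and feeds the result back through `kubectl replace`; its completion is logged 2.58s later, further down. Reconstructing from the sed expressions, the Corefile gains a `log` directive plus a hosts block along these lines:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }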
	I0730 02:26:34.717552 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.722336 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0730 02:26:34.723124 1598980 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0730 02:26:34.723141 1598980 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0730 02:26:34.723305 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	W0730 02:26:34.744379 1598980 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
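The volcano warning is expected on this runtime: the addon's enable callback rejects crio outright, so volcano is skipped while the remaining addons proceed. Re-running the enable by hand would presumably print the same message:

    minikube -p addons-261813 addons enable volcano
    # ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]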
	I0730 02:26:34.754588 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0730 02:26:34.757858 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0730 02:26:34.764648 1598980 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0730 02:26:34.772126 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.775688 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0730 02:26:34.775810 1598980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0730 02:26:34.778219 1598980 out.go:177]   - Using image docker.io/registry:2.8.3
	I0730 02:26:34.781394 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0730 02:26:34.781539 1598980 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0730 02:26:34.781557 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0730 02:26:34.781623 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.787025 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0730 02:26:34.801389 1598980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 02:26:34.803180 1598980 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0730 02:26:34.811575 1598980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 02:26:34.830517 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0730 02:26:34.830541 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0730 02:26:34.830622 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.836035 1598980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 02:26:34.841762 1598980 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0730 02:26:34.842070 1598980 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0730 02:26:34.842083 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0730 02:26:34.842147 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.845813 1598980 out.go:177]   - Using image docker.io/busybox:stable
	I0730 02:26:34.850831 1598980 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0730 02:26:34.850855 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0730 02:26:34.850920 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:34.883633 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.910698 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.914336 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.914346 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.921978 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.949899 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.960543 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.985945 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:34.986848 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:35.019652 1598980 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0730 02:26:35.019687 1598980 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0730 02:26:35.021681 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:35.028076 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	W0730 02:26:35.032165 1598980 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0730 02:26:35.032201 1598980 retry.go:31] will retry after 319.000514ms: ssh: handshake failed: EOF
	I0730 02:26:35.135253 1598980 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0730 02:26:35.135276 1598980 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0730 02:26:35.174090 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0730 02:26:35.174166 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0730 02:26:35.265055 1598980 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0730 02:26:35.265074 1598980 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0730 02:26:35.288844 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0730 02:26:35.288863 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0730 02:26:35.314589 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0730 02:26:35.344768 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0730 02:26:35.364894 1598980 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0730 02:26:35.364969 1598980 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0730 02:26:35.410834 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0730 02:26:35.414097 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0730 02:26:35.422992 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0730 02:26:35.423064 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0730 02:26:35.481727 1598980 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0730 02:26:35.481811 1598980 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0730 02:26:35.498615 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0730 02:26:35.504580 1598980 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0730 02:26:35.504650 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0730 02:26:35.523763 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0730 02:26:35.523839 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0730 02:26:35.524689 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0730 02:26:35.529147 1598980 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0730 02:26:35.529210 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0730 02:26:35.571801 1598980 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0730 02:26:35.571878 1598980 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0730 02:26:35.590150 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0730 02:26:35.590229 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0730 02:26:35.626343 1598980 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0730 02:26:35.626419 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0730 02:26:35.654222 1598980 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0730 02:26:35.654297 1598980 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0730 02:26:35.716490 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0730 02:26:35.719689 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0730 02:26:35.719760 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0730 02:26:35.722600 1598980 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0730 02:26:35.722663 1598980 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0730 02:26:35.793234 1598980 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0730 02:26:35.793310 1598980 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0730 02:26:35.798389 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0730 02:26:35.817023 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0730 02:26:35.817098 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0730 02:26:35.857539 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0730 02:26:35.857619 1598980 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0730 02:26:35.869584 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0730 02:26:35.903706 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0730 02:26:35.903781 1598980 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0730 02:26:35.971403 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0730 02:26:35.991836 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0730 02:26:35.991913 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0730 02:26:36.041892 1598980 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0730 02:26:36.041965 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0730 02:26:36.047485 1598980 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 02:26:36.047568 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0730 02:26:36.122753 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0730 02:26:36.122834 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0730 02:26:36.164656 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0730 02:26:36.174671 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 02:26:36.233342 1598980 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0730 02:26:36.233429 1598980 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0730 02:26:36.334649 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0730 02:26:36.334725 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0730 02:26:36.418674 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0730 02:26:36.418746 1598980 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0730 02:26:36.505420 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0730 02:26:36.505493 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0730 02:26:36.576870 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0730 02:26:36.576958 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0730 02:26:36.761236 1598980 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0730 02:26:36.761311 1598980 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0730 02:26:36.951448 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0730 02:26:37.305865 1598980 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.588702064s)
	I0730 02:26:37.305958 1598980 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0730 02:26:37.306269 1598980 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.476012543s)
	I0730 02:26:37.308092 1598980 node_ready.go:35] waiting up to 6m0s for node "addons-261813" to be "Ready" ...
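node_ready polls the node's Ready condition for up to 6m0s; the periodic "Ready":"False" lines below are its samples. An equivalent manual check, using a jsonpath filter on the Ready condition:

    kubectl --context addons-261813 get node addons-261813 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # False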
	I0730 02:26:38.021763 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.707063475s)
	I0730 02:26:38.276471 1598980 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-261813" context rescaled to 1 replicas
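The rescale above shrinks the coredns Deployment to a single replica for this single-node cluster; done by hand it would be:

    kubectl --context addons-261813 -n kube-system scale deployment coredns --replicas=1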
	I0730 02:26:38.701205 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.356317324s)
	I0730 02:26:39.351905 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:39.734875 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.323970155s)
	I0730 02:26:39.734942 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.320791008s)
	I0730 02:26:39.734969 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.236288917s)
	I0730 02:26:41.229310 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.704550314s)
	I0730 02:26:41.229346 1598980 addons.go:475] Verifying addon ingress=true in "addons-261813"
	I0730 02:26:41.229552 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.512995673s)
	I0730 02:26:41.229888 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.4314258s)
	I0730 02:26:41.229914 1598980 addons.go:475] Verifying addon registry=true in "addons-261813"
	I0730 02:26:41.230021 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.360367555s)
	I0730 02:26:41.230102 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.258629094s)
	I0730 02:26:41.230413 1598980 addons.go:475] Verifying addon metrics-server=true in "addons-261813"
	I0730 02:26:41.230157 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.06542772s)
	I0730 02:26:41.232392 1598980 out.go:177] * Verifying registry addon...
	I0730 02:26:41.232450 1598980 out.go:177] * Verifying ingress addon...
	I0730 02:26:41.232468 1598980 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-261813 service yakd-dashboard -n yakd-dashboard
	
	I0730 02:26:41.235929 1598980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0730 02:26:41.236848 1598980 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0730 02:26:41.254165 1598980 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0730 02:26:41.254197 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:41.254872 1598980 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0730 02:26:41.254886 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
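The kapi waits above poll by label selector until every matching pod reports Running, and those polls dominate the rest of this section. The same selectors can be checked by hand (illustrative):

    kubectl --context addons-261813 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
    kubectl --context addons-261813 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx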
	I0730 02:26:41.361161 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.186396901s)
	W0730 02:26:41.361208 1598980 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0730 02:26:41.361229 1598980 retry.go:31] will retry after 148.504854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
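This failure is an ordering race, not a broken manifest: the VolumeSnapshotClass object sits in the same apply batch as the CRD that defines it, and the CRD is not yet established when the custom resource is validated, hence "ensure CRDs are installed first". The retry below re-runs the batch with `--force` roughly 150ms later and succeeds (completion at 02:26:44.505638). A sketch of serializing the two steps explicitly, assuming the same manifests and kubeconfig context:

    kubectl --context addons-261813 apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl --context addons-261813 wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl --context addons-261813 apply -f csi-hostpath-snapshotclass.yaml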
	I0730 02:26:41.510640 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0730 02:26:41.645411 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.69386487s)
	I0730 02:26:41.645450 1598980 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-261813"
	I0730 02:26:41.647620 1598980 out.go:177] * Verifying csi-hostpath-driver addon...
	I0730 02:26:41.650365 1598980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0730 02:26:41.662999 1598980 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0730 02:26:41.663026 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:41.742167 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:41.753318 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:41.814737 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:42.155157 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:42.243472 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:42.243927 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:42.654593 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:42.741119 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:42.742254 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:43.155065 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:43.241955 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:43.242852 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:43.657733 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:43.721297 1598980 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0730 02:26:43.721455 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:43.745898 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:43.746103 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:43.757789 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:43.912642 1598980 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0730 02:26:43.941070 1598980 addons.go:234] Setting addon gcp-auth=true in "addons-261813"
	I0730 02:26:43.941132 1598980 host.go:66] Checking if "addons-261813" exists ...
	I0730 02:26:43.941636 1598980 cli_runner.go:164] Run: docker container inspect addons-261813 --format={{.State.Status}}
	I0730 02:26:43.962219 1598980 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0730 02:26:43.962278 1598980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-261813
	I0730 02:26:44.002665 1598980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38883 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/addons-261813/id_rsa Username:docker}
	I0730 02:26:44.155506 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:44.246173 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:44.247218 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:44.312346 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:44.505638 1598980 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.99494461s)
	I0730 02:26:44.508954 1598980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0730 02:26:44.511656 1598980 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0730 02:26:44.514964 1598980 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0730 02:26:44.514988 1598980 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0730 02:26:44.559536 1598980 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0730 02:26:44.559559 1598980 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0730 02:26:44.585829 1598980 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0730 02:26:44.585857 1598980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0730 02:26:44.605992 1598980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0730 02:26:44.655298 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:44.742351 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:44.742997 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:45.171295 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:45.246429 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:45.249090 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:45.378343 1598980 addons.go:475] Verifying addon gcp-auth=true in "addons-261813"
	I0730 02:26:45.381252 1598980 out.go:177] * Verifying gcp-auth addon...
	I0730 02:26:45.384788 1598980 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0730 02:26:45.426843 1598980 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0730 02:26:45.426919 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
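The gcp-auth wait follows the same kapi pattern as the registry and ingress polls. Blocking on readiness by hand could look like this (timeout value illustrative):

    kubectl --context addons-261813 -n gcp-auth wait --for=condition=Ready \
      pod -l kubernetes.io/minikube-addons=gcp-auth --timeout=120s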
	I0730 02:26:45.656025 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:45.740233 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:45.744010 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:45.888227 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:46.160697 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:46.240706 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:46.241467 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:46.316976 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:46.388922 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:46.655177 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:46.743217 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:46.746892 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:46.890153 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:47.155768 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:47.242156 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:47.243819 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:47.388320 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:47.655413 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:47.741289 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:47.742091 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:47.894664 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:48.155107 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:48.240387 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:48.244582 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:48.390561 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:48.659197 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:48.741514 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:48.742244 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:48.811221 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:48.893450 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:49.154818 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:49.240842 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:49.241761 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:49.388473 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:49.654931 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:49.742071 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:49.742148 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:49.892653 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:50.154665 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:50.239916 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:50.240533 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:50.395048 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:50.655210 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:50.740371 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:50.741455 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:50.812394 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:50.889627 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:51.154810 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:51.240694 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:51.243874 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:51.389132 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:51.654924 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:51.740902 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:51.741270 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:51.888836 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:52.154899 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:52.242144 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:52.248029 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:52.389261 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:52.655006 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:52.741291 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:52.741556 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:52.888902 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:53.155004 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:53.241854 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:53.245743 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:53.312154 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:53.388628 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:53.654530 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:53.741324 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:53.742037 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:53.888409 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:54.155224 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:54.241357 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:54.242091 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:54.388534 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:54.654254 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:54.740008 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:54.741655 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:54.888363 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:55.155061 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:55.240410 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:55.241134 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:55.388521 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:55.654530 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:55.741549 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:55.742133 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:55.811855 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:55.887983 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:56.154599 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:56.241195 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:56.241516 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:56.388790 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:56.655498 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:56.740863 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:56.742044 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:56.888669 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:57.154542 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:57.240944 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:57.241734 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:57.388509 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:57.655296 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:57.740691 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:57.741780 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:57.888437 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:58.154659 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:58.240472 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:58.241417 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:58.312202 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:26:58.388575 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:58.654842 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:58.740081 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:58.740860 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:58.888114 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:59.154979 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:59.241367 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:59.241577 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:59.388501 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:26:59.654757 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:26:59.741178 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:26:59.741736 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:26:59.888299 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:00.160281 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:00.241426 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:00.241932 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:00.312260 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:00.388669 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:00.654940 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:00.739531 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:00.741083 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:00.889266 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:01.154769 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:01.241018 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:01.241979 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:01.388296 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:01.655156 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:01.741343 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:01.742293 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:01.890266 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:02.154257 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:02.240131 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:02.241991 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:02.388413 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:02.655214 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:02.740098 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:02.741421 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:02.811809 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:02.888669 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:03.155400 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:03.240829 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:03.241180 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:03.387921 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:03.655018 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:03.741073 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:03.741687 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:03.888772 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:04.154686 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:04.241383 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:04.241645 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:04.388184 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:04.655331 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:04.741102 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:04.741432 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:04.888730 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:05.155110 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:05.240513 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:05.241526 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:05.311644 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:05.388795 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:05.654487 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:05.741698 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:05.742241 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:05.888671 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:06.154903 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:06.240003 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:06.242452 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:06.388811 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:06.654473 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:06.741736 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:06.742097 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:06.888462 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:07.155451 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:07.241836 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:07.245696 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:07.312063 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:07.388275 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:07.655678 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:07.742784 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:07.743855 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:07.888198 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:08.155445 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:08.249204 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:08.250185 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:08.388354 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:08.655101 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:08.740568 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:08.741241 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:08.888528 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:09.154744 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:09.242011 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:09.242248 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:09.388488 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:09.654631 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:09.740822 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:09.741412 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:09.811060 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:09.888793 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:10.155786 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:10.240930 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:10.242231 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:10.388536 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:10.654299 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:10.740750 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:10.741413 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:10.888238 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:11.155341 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:11.240221 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:11.242065 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:11.388104 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:11.654295 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:11.740483 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:11.744584 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:11.811478 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:11.888599 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:12.154414 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:12.242948 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:12.243864 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:12.389227 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:12.654350 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:12.740500 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:12.741056 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:12.888213 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:13.154757 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:13.240631 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:13.241046 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:13.388887 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:13.654748 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:13.741125 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:13.741982 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:13.888807 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:14.154717 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:14.240157 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:14.241451 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:14.311331 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:14.388392 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:14.654520 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:14.741042 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:14.741722 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:14.888625 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:15.154914 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:15.239721 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:15.241189 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:15.388410 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:15.654785 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:15.740239 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:15.740973 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:15.888129 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:16.155116 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:16.241810 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:16.242560 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:16.388187 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:16.655017 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:16.739612 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:16.741014 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:16.810753 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:16.887753 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:17.154629 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:17.240909 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:17.241477 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:17.387783 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:17.654924 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:17.739812 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:17.740511 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:17.888863 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:18.154827 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:18.240659 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:18.241469 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:18.387853 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:18.654449 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:18.739358 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:18.740795 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:18.812083 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:18.888880 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:19.155556 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:19.240708 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:19.241652 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:19.388799 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:19.654773 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:19.740147 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:19.741895 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:19.888881 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:20.155114 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:20.240490 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:20.240732 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:20.388624 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:20.654846 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:20.740778 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:20.741693 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:20.813524 1598980 node_ready.go:53] node "addons-261813" has status "Ready":"False"
	I0730 02:27:20.888286 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:21.155319 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:21.240375 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:21.241328 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:21.388316 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:21.660002 1598980 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0730 02:27:21.660031 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:21.766128 1598980 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0730 02:27:21.766162 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:21.766907 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:21.882741 1598980 node_ready.go:49] node "addons-261813" has status "Ready":"True"
	I0730 02:27:21.882766 1598980 node_ready.go:38] duration metric: took 44.574620375s for node "addons-261813" to be "Ready" ...
	I0730 02:27:21.882778 1598980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
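	(Editor's note on the surrounding log: the repeated kapi.go:96 "waiting for pod" lines and the pod_ready.go lines above and below come from minikube's readiness polling — each label selector is re-listed roughly every 500ms, per the timestamps, until every matching pod reports the PodReady condition or the stated budget (here 6m0s) runs out. A minimal illustrative sketch of such a loop, assuming client-go and a configured kubernetes.Interface; helper names are hypothetical, this is not minikube's actual kapi implementation:

	// waitForSelector is a hypothetical sketch of a label-selector readiness
	// wait, in the spirit of the kapi.go lines in this log. It re-lists pods
	// matching selector every interval until all report Ready or timeout.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			// Treat list errors and an empty match set as "not ready yet" and
			// keep polling, which is why the log above prints Pending: [<nil>]
			// while pods are still being scheduled.
			if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
				return nil
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("pods %q in namespace %q not ready within %v", selector, ns, timeout)
	}

	// allReady reports whether every pod has condition PodReady=True.
	func allReady(pods []corev1.Pod) bool {
		for _, p := range pods {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false
			}
		}
		return true
	}

	A caller mirroring the budget in this log would invoke something like waitForSelector(ctx, cs, "kube-system", "k8s-app=kube-dns", 500*time.Millisecond, 6*time.Minute).)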
	I0730 02:27:21.933122 1598980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-l22tb" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:21.934634 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:22.161635 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:22.258272 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:22.271888 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:22.414602 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:22.656072 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:22.748943 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:22.757200 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:22.889287 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:23.157817 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:23.241940 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:23.246380 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:23.392013 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:23.443908 1598980 pod_ready.go:92] pod "coredns-7db6d8ff4d-l22tb" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.443989 1598980 pod_ready.go:81] duration metric: took 1.510831198s for pod "coredns-7db6d8ff4d-l22tb" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.444030 1598980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.457526 1598980 pod_ready.go:92] pod "etcd-addons-261813" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.457595 1598980 pod_ready.go:81] duration metric: took 13.53433ms for pod "etcd-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.457626 1598980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.464891 1598980 pod_ready.go:92] pod "kube-apiserver-addons-261813" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.464963 1598980 pod_ready.go:81] duration metric: took 7.316736ms for pod "kube-apiserver-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.464990 1598980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.472032 1598980 pod_ready.go:92] pod "kube-controller-manager-addons-261813" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.472097 1598980 pod_ready.go:81] duration metric: took 7.087014ms for pod "kube-controller-manager-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.472125 1598980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s88xb" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.477220 1598980 pod_ready.go:92] pod "kube-proxy-s88xb" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.477288 1598980 pod_ready.go:81] duration metric: took 5.142451ms for pod "kube-proxy-s88xb" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.477315 1598980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.661093 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:23.742737 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:23.744105 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:23.839325 1598980 pod_ready.go:92] pod "kube-scheduler-addons-261813" in "kube-system" namespace has status "Ready":"True"
	I0730 02:27:23.839352 1598980 pod_ready.go:81] duration metric: took 362.016377ms for pod "kube-scheduler-addons-261813" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.839365 1598980 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace to be "Ready" ...
	I0730 02:27:23.888306 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:24.156661 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:24.240401 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:24.243128 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:24.388528 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:24.656801 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:24.740315 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:24.742032 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:24.888847 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:25.157076 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:25.242386 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:25.244240 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:25.389176 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:25.656749 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:25.742045 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:25.744956 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:25.845989 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:25.889402 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:26.157092 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:26.244564 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:26.246632 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:26.389597 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:26.657783 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:26.745945 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:26.748693 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:26.889630 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:27.157559 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:27.244068 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:27.247920 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:27.389480 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:27.658429 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:27.745974 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:27.747399 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:27.846796 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:27.889568 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:28.157233 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:28.256164 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:28.258455 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:28.390020 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:28.656811 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:28.743524 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:28.744910 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:28.888904 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:29.157087 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:29.242574 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:29.245483 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:29.392135 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:29.658808 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:29.744380 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:29.748074 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:29.888784 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:30.156897 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:30.241788 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:30.244997 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:30.347187 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:30.388875 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:30.656369 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:30.742474 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:30.746184 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:30.889008 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:31.158484 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:31.242096 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:31.243646 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:31.388944 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:31.664838 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:31.741677 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:31.742921 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:31.890234 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:32.157100 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:32.242736 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:32.243914 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:32.390099 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:32.656816 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:32.745047 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:32.748330 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:32.847453 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:32.889014 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:33.155872 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:33.249118 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:33.252153 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:33.389975 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:33.657687 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:33.744596 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:33.745819 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:33.888944 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:34.156311 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:34.241262 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:34.242873 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:34.388925 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:34.655527 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:34.745476 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:34.747158 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:34.847656 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:34.888860 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:35.159849 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:35.242704 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:35.245407 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:35.392359 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:35.656767 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:35.743360 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:35.746254 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:35.888257 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:36.155499 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:36.246078 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:36.246866 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:36.388316 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:36.665716 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:36.744135 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:36.744735 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:36.889456 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:37.156637 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:37.245774 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:37.247625 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:37.345513 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:37.389041 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:37.656291 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:37.741618 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:37.744451 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:37.888783 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:38.156459 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:38.241586 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:38.242555 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:38.388898 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:38.657059 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:38.743653 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:38.744702 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:38.889734 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:39.157151 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:39.241523 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:39.244735 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:39.348058 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:39.388460 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:39.661541 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:39.744327 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:39.747834 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:39.889794 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:40.158452 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:40.243794 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:40.248221 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:40.388784 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:40.656871 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:40.742511 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:40.745659 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:40.890306 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:41.160405 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:41.244068 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:41.247734 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:41.391456 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:41.658267 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:41.748432 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:41.766233 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:41.862796 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:41.898048 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:42.159033 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:42.242678 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:42.243020 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:42.390212 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:42.656308 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:42.742220 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:42.745305 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:42.889195 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:43.168819 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:43.241549 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:43.242371 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:43.389816 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:43.661952 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:43.743730 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:43.744050 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:43.888823 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:44.156827 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:44.241281 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:44.244327 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:44.347542 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:44.389023 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:44.657772 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:44.745203 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:44.746974 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:44.889815 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:45.161226 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:45.244598 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:45.249847 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:45.389642 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:45.661166 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:45.744716 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:45.748208 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:45.891546 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:46.157943 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:46.242408 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:46.242613 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:46.388772 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:46.655777 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:46.743029 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:46.744016 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:46.845924 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:46.888379 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:47.156185 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:47.240576 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:47.241474 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:47.388673 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:47.656341 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:47.748962 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:47.752821 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:47.889281 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:48.157098 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:48.243699 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:48.250429 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:48.388994 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:48.656958 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:48.742557 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:48.743606 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:48.888717 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:49.155486 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:49.241582 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:49.242255 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:49.345697 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:49.388970 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:49.656004 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:49.741384 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:49.742323 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:49.888380 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:50.156088 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:50.241500 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:50.242979 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:50.388521 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:50.656398 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:50.743025 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:50.750395 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:50.889327 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:51.156416 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:51.254771 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:51.259707 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:51.346167 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:51.388814 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:51.657805 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:51.741215 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:51.743794 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:51.888434 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:52.156226 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:52.244465 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:52.244972 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:52.388411 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:52.659125 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:52.770657 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:52.771649 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:52.892810 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:53.156657 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:53.245170 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:53.250173 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:53.348569 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:53.388941 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:53.657678 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:53.748140 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:53.750962 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:53.890711 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:54.157326 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:54.246197 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:54.247667 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:54.389173 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:54.656676 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:54.742031 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:54.744161 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:54.889560 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:55.157241 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:55.249933 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:55.251041 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:55.349372 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:55.388316 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:55.660705 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:55.774711 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:55.776530 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:55.888971 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:56.156842 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:56.242266 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:56.244773 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:56.389431 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:56.657098 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:56.743897 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:56.745232 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:56.888697 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:57.155510 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:57.242839 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:57.243565 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:57.389765 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:57.656855 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:57.741205 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:57.742810 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:57.847388 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:27:57.888116 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:58.158054 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:58.244831 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:58.245781 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:58.389867 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:58.658079 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:58.745203 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:58.747930 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:58.889414 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:59.157531 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:59.243180 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:59.247283 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:59.388938 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:27:59.657475 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:27:59.742467 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:27:59.744679 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:27:59.888939 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:00.160616 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:00.252395 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:28:00.253587 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:00.350068 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:00.391766 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:00.656803 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:00.746173 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:28:00.748054 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:00.897731 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:01.155939 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:01.243245 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:28:01.243654 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:01.388817 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:01.656798 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:01.743857 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:01.746821 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:28:01.897646 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:02.159857 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:02.243250 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0730 02:28:02.245027 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:02.390013 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:02.656696 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:02.749352 1598980 kapi.go:107] duration metric: took 1m21.513418852s to wait for kubernetes.io/minikube-addons=registry ...
	I0730 02:28:02.749577 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:02.848142 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:02.888811 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:03.156526 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:03.241405 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:03.387949 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:03.662752 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:03.741678 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:03.889969 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:04.157830 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:04.242503 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:04.388493 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:04.656514 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:04.741941 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:04.888889 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:05.156290 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:05.241704 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:05.345461 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:05.389613 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:05.657202 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:05.741369 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:05.888949 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:06.159710 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:06.243196 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:06.389285 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:06.661793 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:06.741873 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:06.890459 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:07.163874 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:07.242189 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:07.346425 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:07.395111 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:07.655843 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:07.741661 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:07.888874 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:08.155693 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:08.241947 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:08.388806 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:08.656422 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:08.741579 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:08.888913 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:09.156854 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:09.242128 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:09.390678 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:09.656367 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:09.745398 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:09.845480 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:09.888843 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:10.156439 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:10.241864 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:10.390095 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:10.657490 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:10.742940 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:10.888251 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:11.157715 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:11.241264 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:11.388836 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:11.662074 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:11.741910 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:11.848216 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:11.889148 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:12.157277 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:12.241358 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:12.392404 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:12.655698 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:12.749165 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:12.888569 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:13.156148 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:13.240954 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:13.388823 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:13.656546 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:13.744404 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:13.889254 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:14.156381 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:14.241015 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:14.345361 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:14.388787 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:14.656225 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:14.741548 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:14.888029 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:15.157140 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:15.242703 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:15.389268 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:15.657173 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:15.744271 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:15.888749 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:16.160140 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:16.246500 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:16.349608 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:16.388401 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:16.656737 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:16.742955 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:16.889254 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:17.156494 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:17.241864 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:17.388780 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:17.655413 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:17.742107 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:17.894320 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:18.159392 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:18.241344 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:18.389267 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:18.655530 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:18.741712 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:18.846034 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:18.888836 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:19.156365 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:19.242264 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:19.388259 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:19.655737 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:19.743461 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:19.889183 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:20.157161 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:20.242092 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:20.392902 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:20.657194 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:20.744592 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:20.890310 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:21.157437 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:21.242078 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:21.345797 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:21.388434 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:21.659262 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:21.741773 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:21.888364 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:22.159038 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:22.241989 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:22.389646 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:22.658201 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:22.741795 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:22.889821 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:23.159110 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0730 02:28:23.247035 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:23.346518 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:23.390483 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:23.657400 1598980 kapi.go:107] duration metric: took 1m42.007032154s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0730 02:28:23.741708 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:23.888557 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:24.241662 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:24.389164 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:24.742178 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:24.888191 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:25.242072 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:25.388918 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:25.740957 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:25.846464 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:25.888798 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:26.241231 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:26.388552 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:26.741616 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:26.888140 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:27.241842 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:27.389637 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:27.741424 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:27.889524 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:28.243390 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:28.345547 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:28.388222 1598980 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0730 02:28:28.741166 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:28.888955 1598980 kapi.go:107] duration metric: took 1m43.504167481s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0730 02:28:28.890882 1598980 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-261813 cluster.
	I0730 02:28:28.893194 1598980 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0730 02:28:28.894649 1598980 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
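	(Editor's note: the two gcp-auth hints above describe an opt-out label and a refresh flow. A minimal sketch of both, assuming a hypothetical pod named "skip-gcp-auth-demo"; the label key `gcp-auth-skip-secret` and the `--refresh` flag are taken verbatim from the messages above, everything else is illustrative:)

	# Create a pod whose configuration carries the opt-out label, so the
	# gcp-auth addon will not mount GCP credentials into it at admission time.
	kubectl --context addons-261813 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: skip-gcp-auth-demo          # hypothetical name for illustration
	  labels:
	    gcp-auth-skip-secret: "true"    # label key quoted from the gcp-auth message
	spec:
	  containers:
	  - name: web
	    image: nginx
	EOF

	# For pods created before the addon was enabled, rerun the enable with
	# --refresh (as the message suggests) or recreate the pods.
	minikube -p addons-261813 addons enable gcp-auth --refresh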
	I0730 02:28:29.241283 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:29.742470 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:30.244543 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:30.349465 1598980 pod_ready.go:102] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"False"
	I0730 02:28:30.742354 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:31.242964 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:31.742115 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:32.245438 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:32.345435 1598980 pod_ready.go:92] pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace has status "Ready":"True"
	I0730 02:28:32.345498 1598980 pod_ready.go:81] duration metric: took 1m8.506124762s for pod "metrics-server-c59844bb4-8rfsg" in "kube-system" namespace to be "Ready" ...
	I0730 02:28:32.345525 1598980 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zrzpl" in "kube-system" namespace to be "Ready" ...
	I0730 02:28:32.350238 1598980 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-zrzpl" in "kube-system" namespace has status "Ready":"True"
	I0730 02:28:32.350308 1598980 pod_ready.go:81] duration metric: took 4.760945ms for pod "nvidia-device-plugin-daemonset-zrzpl" in "kube-system" namespace to be "Ready" ...
	I0730 02:28:32.350344 1598980 pod_ready.go:38] duration metric: took 1m10.467552421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 02:28:32.352283 1598980 api_server.go:52] waiting for apiserver process to appear ...
	I0730 02:28:32.352932 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 02:28:32.353029 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 02:28:32.440050 1598980 cri.go:89] found id: "a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:32.440119 1598980 cri.go:89] found id: ""
	I0730 02:28:32.440140 1598980 logs.go:276] 1 containers: [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8]
	I0730 02:28:32.440230 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.446273 1598980 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 02:28:32.446398 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 02:28:32.524497 1598980 cri.go:89] found id: "ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:32.524564 1598980 cri.go:89] found id: ""
	I0730 02:28:32.524585 1598980 logs.go:276] 1 containers: [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3]
	I0730 02:28:32.524692 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.535887 1598980 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 02:28:32.536058 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 02:28:32.617548 1598980 cri.go:89] found id: "7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:32.617611 1598980 cri.go:89] found id: ""
	I0730 02:28:32.617640 1598980 logs.go:276] 1 containers: [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617]
	I0730 02:28:32.617717 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.623355 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 02:28:32.623480 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 02:28:32.699527 1598980 cri.go:89] found id: "108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:32.699589 1598980 cri.go:89] found id: ""
	I0730 02:28:32.699613 1598980 logs.go:276] 1 containers: [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa]
	I0730 02:28:32.699690 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.703390 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 02:28:32.703519 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 02:28:32.741953 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:32.772834 1598980 cri.go:89] found id: "93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:32.772892 1598980 cri.go:89] found id: ""
	I0730 02:28:32.772914 1598980 logs.go:276] 1 containers: [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5]
	I0730 02:28:32.772988 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.776467 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 02:28:32.776580 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 02:28:32.825788 1598980 cri.go:89] found id: "35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:32.825861 1598980 cri.go:89] found id: ""
	I0730 02:28:32.825885 1598980 logs.go:276] 1 containers: [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c]
	I0730 02:28:32.825960 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.830055 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 02:28:32.830176 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 02:28:32.898471 1598980 cri.go:89] found id: "cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:32.898543 1598980 cri.go:89] found id: ""
	I0730 02:28:32.898566 1598980 logs.go:276] 1 containers: [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a]
	I0730 02:28:32.898649 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:32.903826 1598980 logs.go:123] Gathering logs for container status ...
	I0730 02:28:32.903896 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 02:28:32.974847 1598980 logs.go:123] Gathering logs for CRI-O ...
	I0730 02:28:32.974919 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 02:28:33.104094 1598980 logs.go:123] Gathering logs for dmesg ...
	I0730 02:28:33.104177 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 02:28:33.132615 1598980 logs.go:123] Gathering logs for describe nodes ...
	I0730 02:28:33.132694 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 02:28:33.242663 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:33.401070 1598980 logs.go:123] Gathering logs for kube-apiserver [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8] ...
	I0730 02:28:33.401106 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:33.476893 1598980 logs.go:123] Gathering logs for etcd [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3] ...
	I0730 02:28:33.476972 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:33.528911 1598980 logs.go:123] Gathering logs for coredns [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617] ...
	I0730 02:28:33.528947 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:33.600134 1598980 logs.go:123] Gathering logs for kube-scheduler [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa] ...
	I0730 02:28:33.600224 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:33.657735 1598980 logs.go:123] Gathering logs for kube-proxy [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5] ...
	I0730 02:28:33.657820 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:33.733299 1598980 logs.go:123] Gathering logs for kubelet ...
	I0730 02:28:33.733376 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 02:28:33.743799 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0730 02:28:33.768274 1598980 logs.go:138] Found kubelet problem: Jul 30 02:26:37 addons-261813 kubelet[1543]: E0730 02:26:37.474093    1543 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8bf4d64c-18bf-44e9-8f58-95218dce63f2-kube-api-access-t2wmz podName:8bf4d64c-18bf-44e9-8f58-95218dce63f2 nodeName:}" failed. No retries permitted until 2024-07-30 02:26:37.974064608 +0000 UTC m=+17.468428211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t2wmz" (UniqueName: "kubernetes.io/projected/8bf4d64c-18bf-44e9-8f58-95218dce63f2-kube-api-access-t2wmz") pod "kindnet-2j67p" (UID: "8bf4d64c-18bf-44e9-8f58-95218dce63f2") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-261813" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-261813' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0730 02:28:33.769740 1598980 logs.go:138] Found kubelet problem: Jul 30 02:26:40 addons-261813 kubelet[1543]: W0730 02:26:40.147742    1543 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:33.769997 1598980 logs.go:138] Found kubelet problem: Jul 30 02:26:40 addons-261813 kubelet[1543]: E0730 02:26:40.147796    1543 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:33.803596 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:33.803878 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:33.838753 1598980 logs.go:123] Gathering logs for kindnet [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a] ...
	I0730 02:28:33.838824 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:33.908406 1598980 logs.go:123] Gathering logs for kube-controller-manager [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c] ...
	I0730 02:28:33.908482 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:34.007001 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:34.007060 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0730 02:28:34.007159 1598980 out.go:239] X Problems detected in kubelet:
	W0730 02:28:34.007176 1598980 out.go:239]   Jul 30 02:26:37 addons-261813 kubelet[1543]: E0730 02:26:37.474093    1543 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8bf4d64c-18bf-44e9-8f58-95218dce63f2-kube-api-access-t2wmz podName:8bf4d64c-18bf-44e9-8f58-95218dce63f2 nodeName:}" failed. No retries permitted until 2024-07-30 02:26:37.974064608 +0000 UTC m=+17.468428211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t2wmz" (UniqueName: "kubernetes.io/projected/8bf4d64c-18bf-44e9-8f58-95218dce63f2-kube-api-access-t2wmz") pod "kindnet-2j67p" (UID: "8bf4d64c-18bf-44e9-8f58-95218dce63f2") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-261813" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-261813' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0730 02:28:34.007186 1598980 out.go:239]   Jul 30 02:26:40 addons-261813 kubelet[1543]: W0730 02:26:40.147742    1543 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:34.007194 1598980 out.go:239]   Jul 30 02:26:40 addons-261813 kubelet[1543]: E0730 02:26:40.147796    1543 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:34.007202 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:34.007217 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:34.007229 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:34.007235 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:28:34.242025 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:34.746596 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:35.241901 1598980 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0730 02:28:35.742270 1598980 kapi.go:107] duration metric: took 1m54.505417431s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0730 02:28:35.745445 1598980 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0730 02:28:35.747196 1598980 addons.go:510] duration metric: took 2m1.381541103s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns default-storageclass metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0730 02:28:44.008890 1598980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 02:28:44.024139 1598980 api_server.go:72] duration metric: took 2m9.658727483s to wait for apiserver process to appear ...
	I0730 02:28:44.024169 1598980 api_server.go:88] waiting for apiserver healthz status ...
	I0730 02:28:44.024203 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 02:28:44.024267 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 02:28:44.063492 1598980 cri.go:89] found id: "a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:44.063516 1598980 cri.go:89] found id: ""
	I0730 02:28:44.063524 1598980 logs.go:276] 1 containers: [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8]
	I0730 02:28:44.063582 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.067177 1598980 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 02:28:44.067253 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 02:28:44.103992 1598980 cri.go:89] found id: "ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:44.104018 1598980 cri.go:89] found id: ""
	I0730 02:28:44.104027 1598980 logs.go:276] 1 containers: [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3]
	I0730 02:28:44.104081 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.107723 1598980 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 02:28:44.107799 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 02:28:44.148952 1598980 cri.go:89] found id: "7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:44.148974 1598980 cri.go:89] found id: ""
	I0730 02:28:44.148981 1598980 logs.go:276] 1 containers: [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617]
	I0730 02:28:44.149040 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.152506 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 02:28:44.152582 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 02:28:44.193479 1598980 cri.go:89] found id: "108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:44.193504 1598980 cri.go:89] found id: ""
	I0730 02:28:44.193512 1598980 logs.go:276] 1 containers: [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa]
	I0730 02:28:44.193573 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.197405 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 02:28:44.197482 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 02:28:44.235248 1598980 cri.go:89] found id: "93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:44.235272 1598980 cri.go:89] found id: ""
	I0730 02:28:44.235281 1598980 logs.go:276] 1 containers: [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5]
	I0730 02:28:44.235338 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.238957 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 02:28:44.239040 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 02:28:44.278132 1598980 cri.go:89] found id: "35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:44.278156 1598980 cri.go:89] found id: ""
	I0730 02:28:44.278165 1598980 logs.go:276] 1 containers: [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c]
	I0730 02:28:44.278263 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.281919 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 02:28:44.281999 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 02:28:44.326100 1598980 cri.go:89] found id: "cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:44.326123 1598980 cri.go:89] found id: ""
	I0730 02:28:44.326132 1598980 logs.go:276] 1 containers: [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a]
	I0730 02:28:44.326188 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:44.329752 1598980 logs.go:123] Gathering logs for kube-proxy [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5] ...
	I0730 02:28:44.329778 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:44.368816 1598980 logs.go:123] Gathering logs for kindnet [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a] ...
	I0730 02:28:44.368845 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:44.421705 1598980 logs.go:123] Gathering logs for container status ...
	I0730 02:28:44.421735 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 02:28:44.474570 1598980 logs.go:123] Gathering logs for dmesg ...
	I0730 02:28:44.474603 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 02:28:44.493820 1598980 logs.go:123] Gathering logs for describe nodes ...
	I0730 02:28:44.493862 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 02:28:44.624578 1598980 logs.go:123] Gathering logs for kube-apiserver [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8] ...
	I0730 02:28:44.624610 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:44.679610 1598980 logs.go:123] Gathering logs for etcd [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3] ...
	I0730 02:28:44.679645 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:44.729466 1598980 logs.go:123] Gathering logs for coredns [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617] ...
	I0730 02:28:44.729503 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:44.770825 1598980 logs.go:123] Gathering logs for kubelet ...
	I0730 02:28:44.770862 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0730 02:28:44.792188 1598980 logs.go:138] Found kubelet problem: Jul 30 02:26:40 addons-261813 kubelet[1543]: W0730 02:26:40.147742    1543 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:44.792446 1598980 logs.go:138] Found kubelet problem: Jul 30 02:26:40 addons-261813 kubelet[1543]: E0730 02:26:40.147796    1543 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:44.820351 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:44.820574 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:44.857092 1598980 logs.go:123] Gathering logs for kube-scheduler [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa] ...
	I0730 02:28:44.857138 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:44.900209 1598980 logs.go:123] Gathering logs for kube-controller-manager [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c] ...
	I0730 02:28:44.900243 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:44.970007 1598980 logs.go:123] Gathering logs for CRI-O ...
	I0730 02:28:44.970044 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 02:28:45.078748 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:45.078839 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0730 02:28:45.078951 1598980 out.go:239] X Problems detected in kubelet:
	W0730 02:28:45.079246 1598980 out.go:239]   Jul 30 02:26:40 addons-261813 kubelet[1543]: W0730 02:26:40.147742    1543 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:45.079316 1598980 out.go:239]   Jul 30 02:26:40 addons-261813 kubelet[1543]: E0730 02:26:40.147796    1543 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-261813' and this object
	W0730 02:28:45.079359 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:45.079428 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:45.079447 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:45.079486 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:28:55.081169 1598980 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:28:55.091451 1598980 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0730 02:28:55.092545 1598980 api_server.go:141] control plane version: v1.30.3
	I0730 02:28:55.092574 1598980 api_server.go:131] duration metric: took 11.068397148s to wait for apiserver health ...
	I0730 02:28:55.092583 1598980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 02:28:55.092604 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 02:28:55.092664 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 02:28:55.137090 1598980 cri.go:89] found id: "a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:55.137112 1598980 cri.go:89] found id: ""
	I0730 02:28:55.137120 1598980 logs.go:276] 1 containers: [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8]
	I0730 02:28:55.137180 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.140857 1598980 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 02:28:55.140929 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 02:28:55.184095 1598980 cri.go:89] found id: "ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:55.184119 1598980 cri.go:89] found id: ""
	I0730 02:28:55.184128 1598980 logs.go:276] 1 containers: [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3]
	I0730 02:28:55.184188 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.187573 1598980 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 02:28:55.187639 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 02:28:55.228853 1598980 cri.go:89] found id: "7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:55.228876 1598980 cri.go:89] found id: ""
	I0730 02:28:55.228883 1598980 logs.go:276] 1 containers: [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617]
	I0730 02:28:55.228937 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.232936 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 02:28:55.233007 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 02:28:55.290768 1598980 cri.go:89] found id: "108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:55.290792 1598980 cri.go:89] found id: ""
	I0730 02:28:55.290800 1598980 logs.go:276] 1 containers: [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa]
	I0730 02:28:55.290857 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.294565 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 02:28:55.294672 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 02:28:55.339009 1598980 cri.go:89] found id: "93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:55.339090 1598980 cri.go:89] found id: ""
	I0730 02:28:55.339113 1598980 logs.go:276] 1 containers: [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5]
	I0730 02:28:55.339185 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.342795 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 02:28:55.342901 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 02:28:55.381815 1598980 cri.go:89] found id: "35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:55.381835 1598980 cri.go:89] found id: ""
	I0730 02:28:55.381843 1598980 logs.go:276] 1 containers: [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c]
	I0730 02:28:55.381917 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.385651 1598980 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 02:28:55.385737 1598980 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 02:28:55.429393 1598980 cri.go:89] found id: "cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:55.429466 1598980 cri.go:89] found id: ""
	I0730 02:28:55.429487 1598980 logs.go:276] 1 containers: [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a]
	I0730 02:28:55.429577 1598980 ssh_runner.go:195] Run: which crictl
	I0730 02:28:55.433910 1598980 logs.go:123] Gathering logs for coredns [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617] ...
	I0730 02:28:55.433943 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617"
	I0730 02:28:55.481178 1598980 logs.go:123] Gathering logs for kube-scheduler [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa] ...
	I0730 02:28:55.481208 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa"
	I0730 02:28:55.528018 1598980 logs.go:123] Gathering logs for CRI-O ...
	I0730 02:28:55.528050 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 02:28:55.628171 1598980 logs.go:123] Gathering logs for container status ...
	I0730 02:28:55.628210 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 02:28:55.676340 1598980 logs.go:123] Gathering logs for kube-apiserver [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8] ...
	I0730 02:28:55.676385 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8"
	I0730 02:28:55.750498 1598980 logs.go:123] Gathering logs for dmesg ...
	I0730 02:28:55.750530 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 02:28:55.769769 1598980 logs.go:123] Gathering logs for describe nodes ...
	I0730 02:28:55.769852 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 02:28:55.904026 1598980 logs.go:123] Gathering logs for etcd [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3] ...
	I0730 02:28:55.904056 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3"
	I0730 02:28:55.964544 1598980 logs.go:123] Gathering logs for kube-proxy [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5] ...
	I0730 02:28:55.964650 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5"
	I0730 02:28:56.000485 1598980 logs.go:123] Gathering logs for kube-controller-manager [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c] ...
	I0730 02:28:56.000523 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c"
	I0730 02:28:56.074408 1598980 logs.go:123] Gathering logs for kindnet [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a] ...
	I0730 02:28:56.074447 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a"
	I0730 02:28:56.126605 1598980 logs.go:123] Gathering logs for kubelet ...
	I0730 02:28:56.126638 1598980 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0730 02:28:56.173577 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:56.173837 1598980 logs.go:138] Found kubelet problem: Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:56.210264 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:56.210295 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0730 02:28:56.210354 1598980 out.go:239] X Problems detected in kubelet:
	W0730 02:28:56.210367 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: W0730 02:27:21.546664    1543 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	W0730 02:28:56.210375 1598980 out.go:239]   Jul 30 02:27:21 addons-261813 kubelet[1543]: E0730 02:27:21.546708    1543 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-261813" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-261813' and this object
	I0730 02:28:56.210389 1598980 out.go:304] Setting ErrFile to fd 2...
	I0730 02:28:56.210396 1598980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:29:06.220639 1598980 system_pods.go:59] 18 kube-system pods found
	I0730 02:29:06.220696 1598980 system_pods.go:61] "coredns-7db6d8ff4d-l22tb" [a21b727b-d2cf-4251-8850-9f55d4483afa] Running
	I0730 02:29:06.220710 1598980 system_pods.go:61] "csi-hostpath-attacher-0" [0c79c5cf-ac99-42f9-be4b-3d1d454d90f5] Running
	I0730 02:29:06.220715 1598980 system_pods.go:61] "csi-hostpath-resizer-0" [9e216baa-8fce-4cbd-b955-95512c092fe4] Running
	I0730 02:29:06.220720 1598980 system_pods.go:61] "csi-hostpathplugin-d8vp2" [38eae3e8-34a7-49ac-94d8-1c7fe18609b6] Running
	I0730 02:29:06.220725 1598980 system_pods.go:61] "etcd-addons-261813" [0a75d41c-1d52-41ee-b68b-4032433f51e7] Running
	I0730 02:29:06.220730 1598980 system_pods.go:61] "kindnet-2j67p" [8bf4d64c-18bf-44e9-8f58-95218dce63f2] Running
	I0730 02:29:06.220735 1598980 system_pods.go:61] "kube-apiserver-addons-261813" [c9db107c-71f7-45c7-864d-1c7f1cc5f826] Running
	I0730 02:29:06.220739 1598980 system_pods.go:61] "kube-controller-manager-addons-261813" [293a3bf9-5b9f-47f5-b518-a1e2374f11f1] Running
	I0730 02:29:06.220744 1598980 system_pods.go:61] "kube-ingress-dns-minikube" [0cca283f-f80d-4219-a735-ce5eb75135f4] Running
	I0730 02:29:06.220748 1598980 system_pods.go:61] "kube-proxy-s88xb" [9ef700dc-4b56-4fbd-82bf-b9e75360235b] Running
	I0730 02:29:06.220752 1598980 system_pods.go:61] "kube-scheduler-addons-261813" [22c54723-d213-4ea3-b23c-45042048293e] Running
	I0730 02:29:06.220759 1598980 system_pods.go:61] "metrics-server-c59844bb4-8rfsg" [fac509e3-535c-40c1-ad6c-61226795aa5e] Running
	I0730 02:29:06.220763 1598980 system_pods.go:61] "nvidia-device-plugin-daemonset-zrzpl" [73510050-2ea7-49cd-bf93-d1b56047d84f] Running
	I0730 02:29:06.220767 1598980 system_pods.go:61] "registry-698f998955-hmxpq" [831bfd95-6ae5-4eae-883c-71619d8c8922] Running
	I0730 02:29:06.220770 1598980 system_pods.go:61] "registry-proxy-c2b4j" [8c7f77e7-adf9-4a3f-8a9a-0e7e917e1a2f] Running
	I0730 02:29:06.220775 1598980 system_pods.go:61] "snapshot-controller-745499f584-47q8z" [370da19b-339e-4fdf-a88d-dced8fc43691] Running
	I0730 02:29:06.220779 1598980 system_pods.go:61] "snapshot-controller-745499f584-lxhf2" [6796fbfb-03e1-4e7b-ac36-f89275c1dc6c] Running
	I0730 02:29:06.220787 1598980 system_pods.go:61] "storage-provisioner" [f5c1a2d3-2530-4dbb-843a-cce7e3bc6767] Running
	I0730 02:29:06.220794 1598980 system_pods.go:74] duration metric: took 11.128204865s to wait for pod list to return data ...
	I0730 02:29:06.220807 1598980 default_sa.go:34] waiting for default service account to be created ...
	I0730 02:29:06.223327 1598980 default_sa.go:45] found service account: "default"
	I0730 02:29:06.223356 1598980 default_sa.go:55] duration metric: took 2.54213ms for default service account to be created ...
	I0730 02:29:06.223380 1598980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0730 02:29:06.232601 1598980 system_pods.go:86] 18 kube-system pods found
	I0730 02:29:06.232637 1598980 system_pods.go:89] "coredns-7db6d8ff4d-l22tb" [a21b727b-d2cf-4251-8850-9f55d4483afa] Running
	I0730 02:29:06.232644 1598980 system_pods.go:89] "csi-hostpath-attacher-0" [0c79c5cf-ac99-42f9-be4b-3d1d454d90f5] Running
	I0730 02:29:06.232649 1598980 system_pods.go:89] "csi-hostpath-resizer-0" [9e216baa-8fce-4cbd-b955-95512c092fe4] Running
	I0730 02:29:06.232653 1598980 system_pods.go:89] "csi-hostpathplugin-d8vp2" [38eae3e8-34a7-49ac-94d8-1c7fe18609b6] Running
	I0730 02:29:06.232683 1598980 system_pods.go:89] "etcd-addons-261813" [0a75d41c-1d52-41ee-b68b-4032433f51e7] Running
	I0730 02:29:06.232689 1598980 system_pods.go:89] "kindnet-2j67p" [8bf4d64c-18bf-44e9-8f58-95218dce63f2] Running
	I0730 02:29:06.232693 1598980 system_pods.go:89] "kube-apiserver-addons-261813" [c9db107c-71f7-45c7-864d-1c7f1cc5f826] Running
	I0730 02:29:06.232708 1598980 system_pods.go:89] "kube-controller-manager-addons-261813" [293a3bf9-5b9f-47f5-b518-a1e2374f11f1] Running
	I0730 02:29:06.232714 1598980 system_pods.go:89] "kube-ingress-dns-minikube" [0cca283f-f80d-4219-a735-ce5eb75135f4] Running
	I0730 02:29:06.232721 1598980 system_pods.go:89] "kube-proxy-s88xb" [9ef700dc-4b56-4fbd-82bf-b9e75360235b] Running
	I0730 02:29:06.232725 1598980 system_pods.go:89] "kube-scheduler-addons-261813" [22c54723-d213-4ea3-b23c-45042048293e] Running
	I0730 02:29:06.232733 1598980 system_pods.go:89] "metrics-server-c59844bb4-8rfsg" [fac509e3-535c-40c1-ad6c-61226795aa5e] Running
	I0730 02:29:06.232737 1598980 system_pods.go:89] "nvidia-device-plugin-daemonset-zrzpl" [73510050-2ea7-49cd-bf93-d1b56047d84f] Running
	I0730 02:29:06.232764 1598980 system_pods.go:89] "registry-698f998955-hmxpq" [831bfd95-6ae5-4eae-883c-71619d8c8922] Running
	I0730 02:29:06.232775 1598980 system_pods.go:89] "registry-proxy-c2b4j" [8c7f77e7-adf9-4a3f-8a9a-0e7e917e1a2f] Running
	I0730 02:29:06.232779 1598980 system_pods.go:89] "snapshot-controller-745499f584-47q8z" [370da19b-339e-4fdf-a88d-dced8fc43691] Running
	I0730 02:29:06.232784 1598980 system_pods.go:89] "snapshot-controller-745499f584-lxhf2" [6796fbfb-03e1-4e7b-ac36-f89275c1dc6c] Running
	I0730 02:29:06.232791 1598980 system_pods.go:89] "storage-provisioner" [f5c1a2d3-2530-4dbb-843a-cce7e3bc6767] Running
	I0730 02:29:06.232798 1598980 system_pods.go:126] duration metric: took 9.411713ms to wait for k8s-apps to be running ...
	I0730 02:29:06.232811 1598980 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 02:29:06.232880 1598980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 02:29:06.244932 1598980 system_svc.go:56] duration metric: took 12.105717ms WaitForService to wait for kubelet
	I0730 02:29:06.244963 1598980 kubeadm.go:582] duration metric: took 2m31.879557355s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 02:29:06.244984 1598980 node_conditions.go:102] verifying NodePressure condition ...
	I0730 02:29:06.248749 1598980 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0730 02:29:06.248784 1598980 node_conditions.go:123] node cpu capacity is 2
	I0730 02:29:06.248797 1598980 node_conditions.go:105] duration metric: took 3.807112ms to run NodePressure ...
	I0730 02:29:06.248811 1598980 start.go:241] waiting for startup goroutines ...
	I0730 02:29:06.248819 1598980 start.go:246] waiting for cluster config update ...
	I0730 02:29:06.248845 1598980 start.go:255] writing updated cluster config ...
	I0730 02:29:06.249142 1598980 ssh_runner.go:195] Run: rm -f paused
	I0730 02:29:06.603247 1598980 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0730 02:29:06.605926 1598980 out.go:177] * Done! kubectl is now configured to use "addons-261813" cluster and "default" namespace by default
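	
	
	==> apiserver healthz wait (illustrative sketch) <==
	The tail of the run log above shows minikube polling https://192.168.49.2:8443/healthz until it returns 200 with body "ok" before declaring the apiserver healthy (api_server.go). The Go program below is a minimal, self-contained sketch of that polling pattern, assuming a hypothetical waitForHealthz helper; it illustrates the technique and is not minikube's actual implementation.
	
	// healthz_sketch.go — minimal sketch of an apiserver healthz wait loop,
	// as seen in the run log above. Hypothetical illustration only.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns HTTP 200 with body "ok",
	// or gives up when timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The test cluster's apiserver serves a self-signed certificate,
			// so this sketch skips verification; a production client would
			// pin the cluster CA instead.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // healthz returned 200: ok
				}
			}
			time.Sleep(500 * time.Millisecond) // retry interval between polls
		}
		return fmt.Errorf("apiserver healthz not ready after %v", timeout)
	}
	
	func main() {
		// Endpoint taken from the log above; adjust for your cluster.
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver is healthy")
	}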
	
	
	==> CRI-O <==
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.201960547Z" level=info msg="Removed container 07fb52f251a87201ec36e9a2d5e5507a04ae6a2ee3fec7c8d055b7f838d864fe: ingress-nginx/ingress-nginx-admission-patch-pdbrb/patch" id=105d394c-04cd-4084-8030-f2067fe72b94 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.203514646Z" level=info msg="Removing container: 018f5cf2354316091bd4aa112efffb8df4c9c20a1b4e407df03268cba691ac33" id=7e9767de-f90f-49da-8811-091797932c4e name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.221860983Z" level=info msg="Removed container 018f5cf2354316091bd4aa112efffb8df4c9c20a1b4e407df03268cba691ac33: ingress-nginx/ingress-nginx-admission-create-hwwbc/create" id=7e9767de-f90f-49da-8811-091797932c4e name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.223292082Z" level=info msg="Stopping pod sandbox: e126db030f8a1e24e22482a47732e63ceb0830a159b022a9464f9af5dc62f5b4" id=459f2960-6be9-403d-ac0b-566fa5033764 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.223340064Z" level=info msg="Stopped pod sandbox (already stopped): e126db030f8a1e24e22482a47732e63ceb0830a159b022a9464f9af5dc62f5b4" id=459f2960-6be9-403d-ac0b-566fa5033764 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.223683441Z" level=info msg="Removing pod sandbox: e126db030f8a1e24e22482a47732e63ceb0830a159b022a9464f9af5dc62f5b4" id=73f50bde-25f3-4779-ac2f-c58f4d928455 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.232775471Z" level=info msg="Removed pod sandbox: e126db030f8a1e24e22482a47732e63ceb0830a159b022a9464f9af5dc62f5b4" id=73f50bde-25f3-4779-ac2f-c58f4d928455 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.233396709Z" level=info msg="Stopping pod sandbox: aa09724c47ca2e22eabd27ceae0362ffce15c0a9914a9a581717662acc9f33bb" id=7fdff455-4f5d-479f-81cc-44a577d04000 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.233436363Z" level=info msg="Stopped pod sandbox (already stopped): aa09724c47ca2e22eabd27ceae0362ffce15c0a9914a9a581717662acc9f33bb" id=7fdff455-4f5d-479f-81cc-44a577d04000 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.233702876Z" level=info msg="Removing pod sandbox: aa09724c47ca2e22eabd27ceae0362ffce15c0a9914a9a581717662acc9f33bb" id=5eb37390-a92b-40be-b844-de6d53797593 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.241181847Z" level=info msg="Removed pod sandbox: aa09724c47ca2e22eabd27ceae0362ffce15c0a9914a9a581717662acc9f33bb" id=5eb37390-a92b-40be-b844-de6d53797593 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.241673479Z" level=info msg="Stopping pod sandbox: 9d426a5dad7edfdea04270d6cb67fe41469aceceb252b60ba20cb6100d9ab71f" id=7ed2879a-13d7-416a-a613-8fcc2d566c64 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.241713963Z" level=info msg="Stopped pod sandbox (already stopped): 9d426a5dad7edfdea04270d6cb67fe41469aceceb252b60ba20cb6100d9ab71f" id=7ed2879a-13d7-416a-a613-8fcc2d566c64 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.241982716Z" level=info msg="Removing pod sandbox: 9d426a5dad7edfdea04270d6cb67fe41469aceceb252b60ba20cb6100d9ab71f" id=67ffc5d6-e9b4-4932-b87f-f7942833ea70 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.250800446Z" level=info msg="Removed pod sandbox: 9d426a5dad7edfdea04270d6cb67fe41469aceceb252b60ba20cb6100d9ab71f" id=67ffc5d6-e9b4-4932-b87f-f7942833ea70 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.251267784Z" level=info msg="Stopping pod sandbox: f36e8e0dd726d6c6d88b110905ba21e2691231a72334c47cc27d6d551b14ad33" id=bfaa2edf-c07b-4f7b-8914-56cb60d4efee name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.251307389Z" level=info msg="Stopped pod sandbox (already stopped): f36e8e0dd726d6c6d88b110905ba21e2691231a72334c47cc27d6d551b14ad33" id=bfaa2edf-c07b-4f7b-8914-56cb60d4efee name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.251596941Z" level=info msg="Removing pod sandbox: f36e8e0dd726d6c6d88b110905ba21e2691231a72334c47cc27d6d551b14ad33" id=ca1c9ae8-ab0f-4dfe-b9df-680da065fe00 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 30 02:34:21 addons-261813 crio[961]: time="2024-07-30 02:34:21.260378718Z" level=info msg="Removed pod sandbox: f36e8e0dd726d6c6d88b110905ba21e2691231a72334c47cc27d6d551b14ad33" id=ca1c9ae8-ab0f-4dfe-b9df-680da065fe00 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 30 02:35:58 addons-261813 crio[961]: time="2024-07-30 02:35:58.312205026Z" level=info msg="Stopping container: 78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c (timeout: 30s)" id=60e7d1a1-5908-4295-8d16-982cd650bda3 name=/runtime.v1.RuntimeService/StopContainer
	Jul 30 02:35:59 addons-261813 crio[961]: time="2024-07-30 02:35:59.492408848Z" level=info msg="Stopped container 78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c: kube-system/metrics-server-c59844bb4-8rfsg/metrics-server" id=60e7d1a1-5908-4295-8d16-982cd650bda3 name=/runtime.v1.RuntimeService/StopContainer
	Jul 30 02:35:59 addons-261813 crio[961]: time="2024-07-30 02:35:59.492970001Z" level=info msg="Stopping pod sandbox: 42b8f4cfa00174601fc0c54ab910348574f223eebc127dc67d35968245b26e72" id=86b18418-656e-444e-b30f-ce9c196a21a2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 30 02:35:59 addons-261813 crio[961]: time="2024-07-30 02:35:59.493208994Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-8rfsg Namespace:kube-system ID:42b8f4cfa00174601fc0c54ab910348574f223eebc127dc67d35968245b26e72 UID:fac509e3-535c-40c1-ad6c-61226795aa5e NetNS:/var/run/netns/68bf6a3f-81f7-48b1-8f3d-d009d7682cbb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 30 02:35:59 addons-261813 crio[961]: time="2024-07-30 02:35:59.493359637Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-8rfsg from CNI network \"kindnet\" (type=ptp)"
	Jul 30 02:35:59 addons-261813 crio[961]: time="2024-07-30 02:35:59.533546436Z" level=info msg="Stopped pod sandbox: 42b8f4cfa00174601fc0c54ab910348574f223eebc127dc67d35968245b26e72" id=86b18418-656e-444e-b30f-ce9c196a21a2 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8609bed30522f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   b7d893e597415       hello-world-app-6778b5fc9f-sfmbr
	d6401ab1fa013       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   371517bc5e27c       nginx
	79fac74c217dc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   a3bde3f0b15a1       busybox
	78ca04eb9146a       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   42b8f4cfa0017       metrics-server-c59844bb4-8rfsg
	a3dea84fe5c9b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   88c39fa123ada       storage-provisioner
	7e56240fee5c6       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   472ebf9e1b08d       coredns-7db6d8ff4d-l22tb
	cefca930d8e8f       docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a                      9 minutes ago       Running             kindnet-cni               0                   b873491f998b7       kindnet-2j67p
	93685ccfcfb0c       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                        9 minutes ago       Running             kube-proxy                0                   201ce970d35bd       kube-proxy-s88xb
	ff022a285ff31       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        9 minutes ago       Running             etcd                      0                   d09f676f7ec75       etcd-addons-261813
	35b9e367f7359       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                        9 minutes ago       Running             kube-controller-manager   0                   fd25fe34235e3       kube-controller-manager-addons-261813
	a4673af6f12b1       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                        9 minutes ago       Running             kube-apiserver            0                   147d1b763de49       kube-apiserver-addons-261813
	108e2658a310b       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                        9 minutes ago       Running             kube-scheduler            0                   a7e1e98c29e90       kube-scheduler-addons-261813
	
	
	==> coredns [7e56240fee5c663b5847536c051f4aba1b3a8eebc3f122cdb50459369c93e617] <==
	[INFO] 10.244.0.9:56813 - 57176 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002513856s
	[INFO] 10.244.0.9:47865 - 31297 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000701604s
	[INFO] 10.244.0.9:47865 - 31047 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00121576s
	[INFO] 10.244.0.9:37242 - 10809 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000091025s
	[INFO] 10.244.0.9:37242 - 17724 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000049607s
	[INFO] 10.244.0.9:37708 - 60325 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056228s
	[INFO] 10.244.0.9:37708 - 60071 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000144735s
	[INFO] 10.244.0.9:34626 - 44421 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042854s
	[INFO] 10.244.0.9:34626 - 52103 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.001161878s
	[INFO] 10.244.0.9:48047 - 7756 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001391124s
	[INFO] 10.244.0.9:48047 - 5962 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001429351s
	[INFO] 10.244.0.9:54690 - 61475 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071867s
	[INFO] 10.244.0.9:54690 - 47397 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000062071s
	[INFO] 10.244.0.20:43392 - 36231 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000196122s
	[INFO] 10.244.0.20:45888 - 17029 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166101s
	[INFO] 10.244.0.20:44772 - 63998 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017442s
	[INFO] 10.244.0.20:41258 - 1936 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000359377s
	[INFO] 10.244.0.20:50691 - 2308 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141691s
	[INFO] 10.244.0.20:56280 - 37669 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063186s
	[INFO] 10.244.0.20:38851 - 52577 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005769299s
	[INFO] 10.244.0.20:45170 - 63000 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.006575682s
	[INFO] 10.244.0.20:50011 - 31842 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000664707s
	[INFO] 10.244.0.20:46737 - 14536 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000943298s
	[INFO] 10.244.0.22:46027 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000201571s
	[INFO] 10.244.0.22:45175 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000188279s
	
	
	==> describe nodes <==
	Name:               addons-261813
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-261813
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9
	                    minikube.k8s.io/name=addons-261813
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T02_26_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-261813
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 02:26:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-261813
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 02:35:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 02:34:00 +0000   Tue, 30 Jul 2024 02:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 02:34:00 +0000   Tue, 30 Jul 2024 02:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 02:34:00 +0000   Tue, 30 Jul 2024 02:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 02:34:00 +0000   Tue, 30 Jul 2024 02:27:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-261813
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf74892e2c29481d80de2829e05c4450
	  System UUID:                6ca6ef4f-b25d-4926-a207-98f143624187
	  Boot ID:                    f43244bd-8d62-45f7-a4e7-2b350386049a
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  default                     hello-world-app-6778b5fc9f-sfmbr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 coredns-7db6d8ff4d-l22tb                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m23s
	  kube-system                 etcd-addons-261813                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m39s
	  kube-system                 kindnet-2j67p                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m24s
	  kube-system                 kube-apiserver-addons-261813             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-controller-manager-addons-261813    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-proxy-s88xb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 kube-scheduler-addons-261813             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m46s (x8 over 9m46s)  kubelet          Node addons-261813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m46s (x8 over 9m46s)  kubelet          Node addons-261813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m46s (x8 over 9m46s)  kubelet          Node addons-261813 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m39s (x2 over 9m39s)  kubelet          Node addons-261813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m39s (x2 over 9m39s)  kubelet          Node addons-261813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m39s (x2 over 9m39s)  kubelet          Node addons-261813 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m26s                  node-controller  Node addons-261813 event: Registered Node addons-261813 in Controller
	  Normal  NodeReady                8m38s                  kubelet          Node addons-261813 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001068] FS-Cache: O-key=[8] 'a8eec90000000000'
	[  +0.000737] FS-Cache: N-cookie c=00000119 [p=00000110 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=0000000003cd159e
	[  +0.001078] FS-Cache: N-key=[8] 'a8eec90000000000'
	[  +0.002681] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=00000113 [p=00000110 fl=226 nc=0 na=1]
	[  +0.001024] FS-Cache: O-cookie d=0000000075a7dcb2{9p.inode} n=0000000055616a94
	[  +0.001148] FS-Cache: O-key=[8] 'a8eec90000000000'
	[  +0.000710] FS-Cache: N-cookie c=0000011a [p=00000110 fl=2 nc=0 na=1]
	[  +0.000991] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=00000000d881dbab
	[  +0.001096] FS-Cache: N-key=[8] 'a8eec90000000000'
	[  +2.751220] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=00000111 [p=00000110 fl=226 nc=0 na=1]
	[  +0.001122] FS-Cache: O-cookie d=0000000075a7dcb2{9p.inode} n=00000000387198b3
	[  +0.001118] FS-Cache: O-key=[8] 'a7eec90000000000'
	[  +0.000754] FS-Cache: N-cookie c=0000011c [p=00000110 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=000000007f70e66c
	[  +0.001203] FS-Cache: N-key=[8] 'a7eec90000000000'
	[  +0.349440] FS-Cache: Duplicate cookie detected
	[  +0.000730] FS-Cache: O-cookie c=00000116 [p=00000110 fl=226 nc=0 na=1]
	[  +0.000965] FS-Cache: O-cookie d=0000000075a7dcb2{9p.inode} n=00000000e8c4e10f
	[  +0.001078] FS-Cache: O-key=[8] 'afeec90000000000'
	[  +0.000701] FS-Cache: N-cookie c=0000011d [p=00000110 fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=0000000003cd159e
	[  +0.001050] FS-Cache: N-key=[8] 'afeec90000000000'
	
	
	==> etcd [ff022a285ff311339ae8376cf0dfa43d176eeb28aaabc972b8e57dd7635ba3b3] <==
	{"level":"info","ts":"2024-07-30T02:26:14.412255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-07-30T02:26:14.412296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-30T02:26:14.412349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-30T02:26:14.412385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-30T02:26:14.416108Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T02:26:14.417576Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-261813 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-30T02:26:14.417651Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T02:26:14.420144Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T02:26:14.420299Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T02:26:14.417748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-30T02:26:14.417843Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-30T02:26:14.42041Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-30T02:26:14.42047Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-30T02:26:14.421802Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-30T02:26:14.423072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-30T02:26:35.85306Z","caller":"traceutil/trace.go:171","msg":"trace[1156709301] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"125.826625ms","start":"2024-07-30T02:26:35.727215Z","end":"2024-07-30T02:26:35.853041Z","steps":["trace[1156709301] 'process raft request'  (duration: 125.752535ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T02:26:35.855043Z","caller":"traceutil/trace.go:171","msg":"trace[645754919] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"130.705312ms","start":"2024-07-30T02:26:35.724321Z","end":"2024-07-30T02:26:35.855026Z","steps":["trace[645754919] 'process raft request'  (duration: 126.789058ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T02:26:35.905235Z","caller":"traceutil/trace.go:171","msg":"trace[1032819210] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"135.885652ms","start":"2024-07-30T02:26:35.769333Z","end":"2024-07-30T02:26:35.905219Z","steps":["trace[1032819210] 'process raft request'  (duration: 135.855114ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T02:26:35.925931Z","caller":"traceutil/trace.go:171","msg":"trace[1385812532] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"149.062436ms","start":"2024-07-30T02:26:35.776849Z","end":"2024-07-30T02:26:35.925911Z","steps":["trace[1385812532] 'process raft request'  (duration: 128.221691ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T02:26:35.926318Z","caller":"traceutil/trace.go:171","msg":"trace[577518410] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"149.388583ms","start":"2024-07-30T02:26:35.77692Z","end":"2024-07-30T02:26:35.926309Z","steps":["trace[577518410] 'process raft request'  (duration: 128.229198ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-30T02:26:37.845831Z","caller":"traceutil/trace.go:171","msg":"trace[486773526] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"108.227146ms","start":"2024-07-30T02:26:37.737582Z","end":"2024-07-30T02:26:37.845809Z","steps":["trace[486773526] 'process raft request'  (duration: 63.44517ms)","trace[486773526] 'compare'  (duration: 44.046395ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-30T02:26:37.918144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.773504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-30T02:26:37.923568Z","caller":"traceutil/trace.go:171","msg":"trace[892392990] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:392; }","duration":"133.195332ms","start":"2024-07-30T02:26:37.79035Z","end":"2024-07-30T02:26:37.923545Z","steps":["trace[892392990] 'agreement among raft nodes before linearized reading'  (duration: 127.756995ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T02:26:38.854328Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.139165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-30T02:26:38.854455Z","caller":"traceutil/trace.go:171","msg":"trace[2016433027] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:472; }","duration":"102.270887ms","start":"2024-07-30T02:26:38.75217Z","end":"2024-07-30T02:26:38.854441Z","steps":["trace[2016433027] 'agreement among raft nodes before linearized reading'  (duration: 102.124495ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:35:59 up 1 day, 18 min,  0 users,  load average: 0.29, 0.80, 1.57
	Linux addons-261813 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [cefca930d8e8fd983be8434287aa7edb7a3a2b96bd0dd67a873171fb8a2c4a9a] <==
	I0730 02:34:51.407326       1 main.go:299] handling current node
	W0730 02:35:00.882281       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0730 02:35:00.882310       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0730 02:35:01.406801       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:35:01.406920       1 main.go:299] handling current node
	W0730 02:35:01.817532       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 02:35:01.817573       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0730 02:35:11.407180       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:35:11.407218       1 main.go:299] handling current node
	I0730 02:35:21.406859       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:35:21.406892       1 main.go:299] handling current node
	W0730 02:35:21.666115       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0730 02:35:21.666150       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0730 02:35:31.407413       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:35:31.407453       1 main.go:299] handling current node
	W0730 02:35:39.756349       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 02:35:39.756386       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0730 02:35:41.406824       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:35:41.406865       1 main.go:299] handling current node
	W0730 02:35:51.344973       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0730 02:35:51.345017       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0730 02:35:51.407279       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:35:51.407321       1 main.go:299] handling current node
	W0730 02:35:58.048890       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0730 02:35:58.048933       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	
	
	==> kube-apiserver [a4673af6f12b158c3d95b9bae72b1d04161c2a8a8306110a243d5f1cdd2a82a8] <==
	E0730 02:29:15.581851       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49278: use of closed network connection
	E0730 02:29:15.724157       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49292: use of closed network connection
	I0730 02:29:54.649120       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0730 02:30:23.275102       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0730 02:30:28.970484       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 02:30:28.970637       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 02:30:28.992845       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 02:30:28.996155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 02:30:29.098197       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 02:30:29.099206       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0730 02:30:29.113118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0730 02:30:29.113241       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0730 02:30:30.067174       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0730 02:30:30.113879       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0730 02:30:30.137025       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0730 02:30:35.716228       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.24.83"}
	I0730 02:30:59.010875       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0730 02:31:00.082520       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0730 02:31:04.569919       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0730 02:31:04.855393       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.179.112"}
	I0730 02:33:25.324188       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.109.27"}
	E0730 02:33:26.850521       1 watch.go:250] http2: stream closed
	E0730 02:33:27.582398       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0730 02:33:30.277901       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0730 02:33:30.287697       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [35b9e367f7359128ef601c1a71427974ea33f4da9abd1506c1c55e835e96814c] <==
	W0730 02:34:04.088403       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:34:04.088444       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:34:13.126179       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:34:13.126215       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:34:23.128164       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:34:23.128208       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:34:28.764445       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:34:28.764485       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:34:46.576441       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:34:46.576480       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:34:57.273980       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:34:57.274020       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:35:08.203016       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:35:08.203054       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:35:16.012398       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:35:16.012437       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:35:20.204486       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:35:20.204524       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:35:50.319992       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:35:50.320031       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:35:53.961950       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:35:53.961989       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0730 02:35:55.832961       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0730 02:35:55.833008       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0730 02:35:58.275890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="4.817µs"
	
	
	==> kube-proxy [93685ccfcfb0c260d0186f6cfd7f4a11e1095ecdf1b925d61d9103b397570fd5] <==
	I0730 02:26:40.272098       1 server_linux.go:69] "Using iptables proxy"
	I0730 02:26:40.637759       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0730 02:26:40.767191       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0730 02:26:40.767307       1 server_linux.go:165] "Using iptables Proxier"
	I0730 02:26:40.783741       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0730 02:26:40.783846       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0730 02:26:40.783893       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 02:26:40.786842       1 server.go:872] "Version info" version="v1.30.3"
	I0730 02:26:40.787638       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 02:26:40.804133       1 config.go:192] "Starting service config controller"
	I0730 02:26:40.811593       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 02:26:40.807974       1 config.go:101] "Starting endpoint slice config controller"
	I0730 02:26:40.814364       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 02:26:40.808927       1 config.go:319] "Starting node config controller"
	I0730 02:26:40.814473       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 02:26:40.922295       1 shared_informer.go:320] Caches are synced for node config
	I0730 02:26:40.922401       1 shared_informer.go:320] Caches are synced for service config
	I0730 02:26:40.922427       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [108e2658a310bc5ebb8d8428243ed7499d8a76a12ccb0c453dbf2bf652a785aa] <==
	W0730 02:26:18.466595       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0730 02:26:18.466644       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0730 02:26:18.466822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0730 02:26:18.467149       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0730 02:26:18.466890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0730 02:26:18.467247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0730 02:26:18.466948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0730 02:26:18.467313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0730 02:26:18.467018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0730 02:26:18.467378       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0730 02:26:18.467064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 02:26:18.467456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0730 02:26:18.467110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0730 02:26:18.467520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0730 02:26:18.467704       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0730 02:26:18.467884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0730 02:26:18.467805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0730 02:26:18.468191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0730 02:26:18.467854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0730 02:26:18.468293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 02:26:18.468100       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0730 02:26:18.468420       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0730 02:26:19.409612       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0730 02:26:19.409742       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0730 02:26:21.146996       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.815339    1543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c03fe45-826c-4247-8a5a-13cc26a231ad-webhook-cert\") pod \"1c03fe45-826c-4247-8a5a-13cc26a231ad\" (UID: \"1c03fe45-826c-4247-8a5a-13cc26a231ad\") "
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.815398    1543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dzvt\" (UniqueName: \"kubernetes.io/projected/1c03fe45-826c-4247-8a5a-13cc26a231ad-kube-api-access-6dzvt\") pod \"1c03fe45-826c-4247-8a5a-13cc26a231ad\" (UID: \"1c03fe45-826c-4247-8a5a-13cc26a231ad\") "
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.817469    1543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c03fe45-826c-4247-8a5a-13cc26a231ad-kube-api-access-6dzvt" (OuterVolumeSpecName: "kube-api-access-6dzvt") pod "1c03fe45-826c-4247-8a5a-13cc26a231ad" (UID: "1c03fe45-826c-4247-8a5a-13cc26a231ad"). InnerVolumeSpecName "kube-api-access-6dzvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.822760    1543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c03fe45-826c-4247-8a5a-13cc26a231ad-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1c03fe45-826c-4247-8a5a-13cc26a231ad" (UID: "1c03fe45-826c-4247-8a5a-13cc26a231ad"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.846855    1543 scope.go:117] "RemoveContainer" containerID="a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963"
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.865498    1543 scope.go:117] "RemoveContainer" containerID="a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963"
	Jul 30 02:33:30 addons-261813 kubelet[1543]: E0730 02:33:30.865896    1543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963\": container with ID starting with a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963 not found: ID does not exist" containerID="a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963"
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.865933    1543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963"} err="failed to get container status \"a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963\": rpc error: code = NotFound desc = could not find container \"a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963\": container with ID starting with a603f3d0c907e8f429754ff9886d7ca442e9786870679c487e8b51f0878f2963 not found: ID does not exist"
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.916603    1543 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c03fe45-826c-4247-8a5a-13cc26a231ad-webhook-cert\") on node \"addons-261813\" DevicePath \"\""
	Jul 30 02:33:30 addons-261813 kubelet[1543]: I0730 02:33:30.916645    1543 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6dzvt\" (UniqueName: \"kubernetes.io/projected/1c03fe45-826c-4247-8a5a-13cc26a231ad-kube-api-access-6dzvt\") on node \"addons-261813\" DevicePath \"\""
	Jul 30 02:33:32 addons-261813 kubelet[1543]: I0730 02:33:32.687607    1543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c03fe45-826c-4247-8a5a-13cc26a231ad" path="/var/lib/kubelet/pods/1c03fe45-826c-4247-8a5a-13cc26a231ad/volumes"
	Jul 30 02:34:16 addons-261813 kubelet[1543]: I0730 02:34:16.686663    1543 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 30 02:34:21 addons-261813 kubelet[1543]: I0730 02:34:21.175656    1543 scope.go:117] "RemoveContainer" containerID="07fb52f251a87201ec36e9a2d5e5507a04ae6a2ee3fec7c8d055b7f838d864fe"
	Jul 30 02:34:21 addons-261813 kubelet[1543]: I0730 02:34:21.202324    1543 scope.go:117] "RemoveContainer" containerID="018f5cf2354316091bd4aa112efffb8df4c9c20a1b4e407df03268cba691ac33"
	Jul 30 02:35:29 addons-261813 kubelet[1543]: I0730 02:35:29.686270    1543 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 30 02:35:59 addons-261813 kubelet[1543]: I0730 02:35:59.659068    1543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlz5z\" (UniqueName: \"kubernetes.io/projected/fac509e3-535c-40c1-ad6c-61226795aa5e-kube-api-access-qlz5z\") pod \"fac509e3-535c-40c1-ad6c-61226795aa5e\" (UID: \"fac509e3-535c-40c1-ad6c-61226795aa5e\") "
	Jul 30 02:35:59 addons-261813 kubelet[1543]: I0730 02:35:59.659126    1543 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fac509e3-535c-40c1-ad6c-61226795aa5e-tmp-dir\") pod \"fac509e3-535c-40c1-ad6c-61226795aa5e\" (UID: \"fac509e3-535c-40c1-ad6c-61226795aa5e\") "
	Jul 30 02:35:59 addons-261813 kubelet[1543]: I0730 02:35:59.660881    1543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fac509e3-535c-40c1-ad6c-61226795aa5e-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "fac509e3-535c-40c1-ad6c-61226795aa5e" (UID: "fac509e3-535c-40c1-ad6c-61226795aa5e"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 30 02:35:59 addons-261813 kubelet[1543]: I0730 02:35:59.664866    1543 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac509e3-535c-40c1-ad6c-61226795aa5e-kube-api-access-qlz5z" (OuterVolumeSpecName: "kube-api-access-qlz5z") pod "fac509e3-535c-40c1-ad6c-61226795aa5e" (UID: "fac509e3-535c-40c1-ad6c-61226795aa5e"). InnerVolumeSpecName "kube-api-access-qlz5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 30 02:35:59 addons-261813 kubelet[1543]: I0730 02:35:59.760467    1543 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qlz5z\" (UniqueName: \"kubernetes.io/projected/fac509e3-535c-40c1-ad6c-61226795aa5e-kube-api-access-qlz5z\") on node \"addons-261813\" DevicePath \"\""
	Jul 30 02:35:59 addons-261813 kubelet[1543]: I0730 02:35:59.760513    1543 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fac509e3-535c-40c1-ad6c-61226795aa5e-tmp-dir\") on node \"addons-261813\" DevicePath \"\""
	Jul 30 02:36:00 addons-261813 kubelet[1543]: I0730 02:36:00.147607    1543 scope.go:117] "RemoveContainer" containerID="78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c"
	Jul 30 02:36:00 addons-261813 kubelet[1543]: I0730 02:36:00.220346    1543 scope.go:117] "RemoveContainer" containerID="78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c"
	Jul 30 02:36:00 addons-261813 kubelet[1543]: E0730 02:36:00.222029    1543 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c\": container with ID starting with 78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c not found: ID does not exist" containerID="78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c"
	Jul 30 02:36:00 addons-261813 kubelet[1543]: I0730 02:36:00.222098    1543 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c"} err="failed to get container status \"78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c\": rpc error: code = NotFound desc = could not find container \"78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c\": container with ID starting with 78ca04eb9146a763eb63ef0724cbb041fbdc2cecb2ab68278edc8d61b31a141c not found: ID does not exist"
	
	
	==> storage-provisioner [a3dea84fe5c9b07b64b400072c6d1439ccfdd9c582dff8f4770a92b706c7dcf4] <==
	I0730 02:27:22.487385       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0730 02:27:22.505591       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0730 02:27:22.505727       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0730 02:27:22.531948       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0730 02:27:22.532491       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa5818b7-4691-4ab8-8ae6-79fc45b09a77", APIVersion:"v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-261813_8cc6457e-2fa4-461d-ae7a-ff1f1e7c4e88 became leader
	I0730 02:27:22.532710       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-261813_8cc6457e-2fa4-461d-ae7a-ff1f1e7c4e88!
	I0730 02:27:22.640193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-261813_8cc6457e-2fa4-461d-ae7a-ff1f1e7c4e88!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-261813 -n addons-261813
helpers_test.go:261: (dbg) Run:  kubectl --context addons-261813 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (310.40s)
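The recurring kube-controller-manager errors above ("failed to list *v1.PartialObjectMetadata: the server could not find the requested resource") are the usual signature of the garbage collector's metadata informer still watching an API group that was removed mid-run; the apiserver log shows watchers for snapshot.storage.k8s.io and gadget.kinvolk.io being terminated, and metrics-server itself is torn down at the end. A minimal client-go discovery sketch to see which groups the server still advertises (the default kubeconfig location is an assumption, not something the harness uses):

	package main
	
	import (
		"fmt"
		"log"
	
		"k8s.io/client-go/discovery"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: kubeconfig lives at the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		groups, err := dc.ServerGroups()
		if err != nil {
			log.Fatal(err)
		}
		// Any group the controller-manager keeps trying to list but which is
		// missing here is a stale informer target, not a live API.
		for _, g := range groups.Groups {
			fmt.Println(g.Name)
		}
	}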

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (127.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-642542 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0730 02:49:21.809319 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
E0730 02:49:49.493040 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-642542 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m2.57123365s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-642542       NotReady   control-plane   10m     v1.30.3
	ha-642542-m02   Ready      control-plane   9m51s   v1.30.3
	ha-642542-m04   Ready      <none>          7m28s   v1.30.3

                                                
                                                
-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
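The assertion at ha_test.go:592 reads each node's Ready condition through a kubectl go-template; the "Unknown" entry corresponds to the NotReady control plane above. The same check can be made directly against the API with client-go. A minimal sketch, assuming the default kubeconfig location rather than the harness's per-profile config:

	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: the profile's kubeconfig is at the default location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Print each node's Ready condition status; a kubelet that has stopped
		// posting status shows up as "Unknown", exactly as in the output above.
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("%s\t%s\n", n.Name, c.Status)
				}
			}
		}
	}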
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-642542
helpers_test.go:235: (dbg) docker inspect ha-642542:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "89e6f2fdaeb97e179b135e3b11063558312a7eeb528ccee51d2941fa5ee86cd3",
	        "Created": "2024-07-30T02:40:22.747112904Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1659075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-30T02:49:09.975761939Z",
	            "FinishedAt": "2024-07-30T02:49:09.100255607Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/89e6f2fdaeb97e179b135e3b11063558312a7eeb528ccee51d2941fa5ee86cd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/89e6f2fdaeb97e179b135e3b11063558312a7eeb528ccee51d2941fa5ee86cd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/89e6f2fdaeb97e179b135e3b11063558312a7eeb528ccee51d2941fa5ee86cd3/hosts",
	        "LogPath": "/var/lib/docker/containers/89e6f2fdaeb97e179b135e3b11063558312a7eeb528ccee51d2941fa5ee86cd3/89e6f2fdaeb97e179b135e3b11063558312a7eeb528ccee51d2941fa5ee86cd3-json.log",
	        "Name": "/ha-642542",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-642542:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-642542",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f45064b651bc8f89e1c70d32c56f7e0380465e5ad4c9379a2da5c3adff5c4dd8-init/diff:/var/lib/docker/overlay2/acd0679734de498ee4da989a39c292c935753fd7c8a4808d283ba27465852ac6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f45064b651bc8f89e1c70d32c56f7e0380465e5ad4c9379a2da5c3adff5c4dd8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f45064b651bc8f89e1c70d32c56f7e0380465e5ad4c9379a2da5c3adff5c4dd8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f45064b651bc8f89e1c70d32c56f7e0380465e5ad4c9379a2da5c3adff5c4dd8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-642542",
	                "Source": "/var/lib/docker/volumes/ha-642542/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-642542",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-642542",
	                "name.minikube.sigs.k8s.io": "ha-642542",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93730ef48c9b0725bbfb2ffdfdb0fc91a9a18ae90bc296e5e9ee3bd3a3e4dde4",
	            "SandboxKey": "/var/run/docker/netns/93730ef48c9b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38945"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-642542": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "62b0a8e5c8e754ea4b12cb321113b9888c3821e2809e75022fcb5f8544f44e51",
	                    "EndpointID": "fa7659ba9ba232037ab96f3bfbdc2dd073f8e56eb5dcd9094b1f636407242f5d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-642542",
	                        "89e6f2fdaeb9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
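The inspect output shows every container port published on 127.0.0.1 with an ephemeral host port (8443/tcp maps to 38946 here), which is how the harness reaches the apiserver of the restarted node. To pull that mapping by hand, a Go template against the engine works; the sketch below shells out to docker inspect and is purely illustrative, not the harness's own helper:

	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Assumption: the profile container is named ha-642542, as in the inspect
		// output above; the template walks NetworkSettings.Ports["8443/tcp"][0].
		tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl, "ha-642542").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // "38946" for the state captured above
	}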
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-642542 -n ha-642542
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-642542 logs -n 25: (1.979884279s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-642542 cp ha-642542-m03:/home/docker/cp-test.txt                             | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542-m04:/home/docker/cp-test_ha-642542-m03_ha-642542-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-642542 ssh -n                                                                | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-642542 ssh -n ha-642542-m04 sudo cat                                         | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | /home/docker/cp-test_ha-642542-m03_ha-642542-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-642542 cp testdata/cp-test.txt                                               | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-642542 ssh -n                                                                | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-642542 cp ha-642542-m04:/home/docker/cp-test.txt                             | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile801376715/001/cp-test_ha-642542-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-642542 ssh -n                                                                | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-642542 cp ha-642542-m04:/home/docker/cp-test.txt                             | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542:/home/docker/cp-test_ha-642542-m04_ha-642542.txt                      |           |         |         |                     |                     |
	| ssh     | ha-642542 ssh -n                                                                | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-642542 ssh -n ha-642542 sudo cat                                             | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | /home/docker/cp-test_ha-642542-m04_ha-642542.txt                                |           |         |         |                     |                     |
	| cp      | ha-642542 cp ha-642542-m04:/home/docker/cp-test.txt                             | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542-m02:/home/docker/cp-test_ha-642542-m04_ha-642542-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-642542 ssh -n                                                                | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-642542 ssh -n ha-642542-m02 sudo cat                                         | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | /home/docker/cp-test_ha-642542-m04_ha-642542-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-642542 cp ha-642542-m04:/home/docker/cp-test.txt                             | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542-m03:/home/docker/cp-test_ha-642542-m04_ha-642542-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-642542 ssh -n                                                                | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | ha-642542-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-642542 ssh -n ha-642542-m03 sudo cat                                         | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | /home/docker/cp-test_ha-642542-m04_ha-642542-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-642542 node stop m02 -v=7                                                    | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:44 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-642542 node start m02 -v=7                                                   | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:44 UTC | 30 Jul 24 02:45 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-642542 -v=7                                                          | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:45 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-642542 -v=7                                                               | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:45 UTC | 30 Jul 24 02:45 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-642542 --wait=true -v=7                                                   | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:45 UTC | 30 Jul 24 02:48 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-642542                                                               | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:48 UTC |                     |
	| node    | ha-642542 node delete m03 -v=7                                                  | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:48 UTC | 30 Jul 24 02:48 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-642542 stop -v=7                                                             | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:48 UTC | 30 Jul 24 02:49 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-642542 --wait=true                                                        | ha-642542 | jenkins | v1.33.1 | 30 Jul 24 02:49 UTC | 30 Jul 24 02:51 UTC |
	|         | -v=7 --alsologtostderr                                                          |           |         |         |                     |                     |
	|         | --driver=docker                                                                 |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                        |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 02:49:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 02:49:09.505904 1658870 out.go:291] Setting OutFile to fd 1 ...
	I0730 02:49:09.506108 1658870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:49:09.506136 1658870 out.go:304] Setting ErrFile to fd 2...
	I0730 02:49:09.506154 1658870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:49:09.506414 1658870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 02:49:09.506829 1658870 out.go:298] Setting JSON to false
	I0730 02:49:09.507754 1658870 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":88295,"bootTime":1722219454,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0730 02:49:09.507849 1658870 start.go:139] virtualization:  
	I0730 02:49:09.510901 1658870 out.go:177] * [ha-642542] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0730 02:49:09.513378 1658870 out.go:177]   - MINIKUBE_LOCATION=19348
	I0730 02:49:09.513508 1658870 notify.go:220] Checking for updates...
	I0730 02:49:09.516983 1658870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 02:49:09.518647 1658870 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:49:09.520444 1658870 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	I0730 02:49:09.522026 1658870 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0730 02:49:09.523846 1658870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 02:49:09.526495 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:49:09.527104 1658870 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 02:49:09.549534 1658870 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0730 02:49:09.549678 1658870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:49:09.611561 1658870 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-07-30 02:49:09.601639861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:49:09.611681 1658870 docker.go:307] overlay module found
	I0730 02:49:09.614873 1658870 out.go:177] * Using the docker driver based on existing profile
	I0730 02:49:09.616559 1658870 start.go:297] selected driver: docker
	I0730 02:49:09.616580 1658870 start.go:901] validating driver "docker" against &{Name:ha-642542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-642542 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:49:09.616734 1658870 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 02:49:09.616835 1658870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:49:09.685045 1658870 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-07-30 02:49:09.675315418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:49:09.685462 1658870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 02:49:09.685493 1658870 cni.go:84] Creating CNI manager for ""
	I0730 02:49:09.685500 1658870 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0730 02:49:09.685562 1658870 start.go:340] cluster config:
	{Name:ha-642542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-642542 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:49:09.687575 1658870 out.go:177] * Starting "ha-642542" primary control-plane node in "ha-642542" cluster
	I0730 02:49:09.689351 1658870 cache.go:121] Beginning downloading kic base image for docker with crio
	I0730 02:49:09.691148 1658870 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0730 02:49:09.692861 1658870 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:49:09.692919 1658870 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0730 02:49:09.692931 1658870 cache.go:56] Caching tarball of preloaded images
	I0730 02:49:09.693018 1658870 preload.go:172] Found /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0730 02:49:09.693041 1658870 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 02:49:09.693188 1658870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/config.json ...
	I0730 02:49:09.693429 1658870 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	W0730 02:49:09.713540 1658870 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0730 02:49:09.713565 1658870 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0730 02:49:09.713667 1658870 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0730 02:49:09.713692 1658870 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0730 02:49:09.713697 1658870 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0730 02:49:09.713705 1658870 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0730 02:49:09.713714 1658870 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0730 02:49:09.715139 1658870 image.go:273] response: 
	I0730 02:49:09.837685 1658870 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0730 02:49:09.837726 1658870 cache.go:194] Successfully downloaded all kic artifacts
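
The W-line above ("image ... is of wrong architecture") explains the tarball path that follows: the kicbase image present in the local daemon does not match this arm64 host, so minikube loads the cached tarball instead of using the daemon's copy. A hedged sketch of such an architecture check via the docker CLI (the same idea, not minikube's image.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"runtime"
    	"strings"
    )

    func main() {
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326"
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Architecture}}", image).Output()
    	if err != nil {
    		fmt.Println("image not in local daemon:", err)
    		return
    	}
    	if got := strings.TrimSpace(string(out)); got != runtime.GOARCH {
    		// "arm64" is expected on this host; anything else forces the
    		// cached-tarball fallback seen in the log.
    		fmt.Printf("wrong architecture %q, fall back to cached tarball\n", got)
    	}
    }
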
	I0730 02:49:09.837777 1658870 start.go:360] acquireMachinesLock for ha-642542: {Name:mk35e1a23c3ff3d417c2cb04be42a761507bc88e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 02:49:09.837852 1658870 start.go:364] duration metric: took 44.725µs to acquireMachinesLock for "ha-642542"
	I0730 02:49:09.837878 1658870 start.go:96] Skipping create...Using existing machine configuration
	I0730 02:49:09.837888 1658870 fix.go:54] fixHost starting: 
	I0730 02:49:09.838157 1658870 cli_runner.go:164] Run: docker container inspect ha-642542 --format={{.State.Status}}
	I0730 02:49:09.854165 1658870 fix.go:112] recreateIfNeeded on ha-642542: state=Stopped err=<nil>
	W0730 02:49:09.854199 1658870 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 02:49:09.856285 1658870 out.go:177] * Restarting existing docker container for "ha-642542" ...
	I0730 02:49:09.858080 1658870 cli_runner.go:164] Run: docker start ha-642542
	I0730 02:49:10.148521 1658870 cli_runner.go:164] Run: docker container inspect ha-642542 --format={{.State.Status}}
	I0730 02:49:10.170906 1658870 kic.go:430] container "ha-642542" state is running.
	I0730 02:49:10.171300 1658870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542
	I0730 02:49:10.198313 1658870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/config.json ...
	I0730 02:49:10.198559 1658870 machine.go:94] provisionDockerMachine start ...
	I0730 02:49:10.198638 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:49:10.219768 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:49:10.220121 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38943 <nil> <nil>}
	I0730 02:49:10.220152 1658870 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 02:49:10.220795 1658870 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0730 02:49:13.359350 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-642542
	
	I0730 02:49:13.359374 1658870 ubuntu.go:169] provisioning hostname "ha-642542"
	I0730 02:49:13.359438 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:49:13.375715 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:49:13.376040 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38943 <nil> <nil>}
	I0730 02:49:13.376055 1658870 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-642542 && echo "ha-642542" | sudo tee /etc/hostname
	I0730 02:49:13.519375 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-642542
	
	I0730 02:49:13.519454 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:49:13.536913 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:49:13.539282 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38943 <nil> <nil>}
	I0730 02:49:13.539313 1658870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-642542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-642542/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-642542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 02:49:13.671703 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
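
The provisioning above runs three SSH commands against the restarted container: `hostname`, the `sudo hostname && tee /etc/hostname` step, and the /etc/hosts patch script. The "handshake failed: EOF" line is an early dial attempt while sshd is still coming up. A self-contained sketch of that connect-with-retry-and-run flow, assuming golang.org/x/crypto/ssh (not libmachine's native client):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh" // assumed dependency for this sketch
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway CI node
    		Timeout:         5 * time.Second,
    	}
    	// Retry while the restarted container's sshd comes up; the
    	// "handshake failed: EOF" above is one such early attempt.
    	var client *ssh.Client
    	for i := 0; i < 10; i++ {
    		if client, err = ssh.Dial("tcp", "127.0.0.1:38943", cfg); err == nil {
    			break
    		}
    		time.Sleep(time.Second)
    	}
    	if client == nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, _ := sess.CombinedOutput("hostname")
    	fmt.Printf("%s", out) // "ha-642542", matching the log
    }
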
	I0730 02:49:13.671730 1658870 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19348-1592571/.minikube CaCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19348-1592571/.minikube}
	I0730 02:49:13.671757 1658870 ubuntu.go:177] setting up certificates
	I0730 02:49:13.671767 1658870 provision.go:84] configureAuth start
	I0730 02:49:13.671825 1658870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542
	I0730 02:49:13.688469 1658870 provision.go:143] copyHostCerts
	I0730 02:49:13.688516 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem
	I0730 02:49:13.688547 1658870 exec_runner.go:144] found /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem, removing ...
	I0730 02:49:13.688563 1658870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem
	I0730 02:49:13.688641 1658870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem (1078 bytes)
	I0730 02:49:13.688731 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem
	I0730 02:49:13.688757 1658870 exec_runner.go:144] found /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem, removing ...
	I0730 02:49:13.688765 1658870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem
	I0730 02:49:13.688792 1658870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem (1123 bytes)
	I0730 02:49:13.688839 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem
	I0730 02:49:13.688860 1658870 exec_runner.go:144] found /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem, removing ...
	I0730 02:49:13.688868 1658870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem
	I0730 02:49:13.688893 1658870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem (1675 bytes)
	I0730 02:49:13.688946 1658870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem org=jenkins.ha-642542 san=[127.0.0.1 192.168.49.2 ha-642542 localhost minikube]
	I0730 02:49:14.175493 1658870 provision.go:177] copyRemoteCerts
	I0730 02:49:14.175570 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 02:49:14.175610 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:49:14.192236 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38943 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542/id_rsa Username:docker}
	I0730 02:49:14.284883 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 02:49:14.284943 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0730 02:49:14.308596 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 02:49:14.308667 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0730 02:49:14.331924 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 02:49:14.332048 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0730 02:49:14.355285 1658870 provision.go:87] duration metric: took 683.50302ms to configureAuth
	I0730 02:49:14.355312 1658870 ubuntu.go:193] setting minikube options for container-runtime
	I0730 02:49:14.355540 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:49:14.355659 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:49:14.372302 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:49:14.372556 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38943 <nil> <nil>}
	I0730 02:49:14.372578 1658870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 02:49:14.769290 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 02:49:14.769355 1658870 machine.go:97] duration metric: took 4.570777495s to provisionDockerMachine
	I0730 02:49:14.769382 1658870 start.go:293] postStartSetup for "ha-642542" (driver="docker")
	I0730 02:49:14.769412 1658870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 02:49:14.769518 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 02:49:14.769599 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:49:14.796198 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38943 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542/id_rsa Username:docker}
	I0730 02:49:14.892752 1658870 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 02:49:14.895953 1658870 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0730 02:49:14.896012 1658870 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0730 02:49:14.896023 1658870 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0730 02:49:14.896034 1658870 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0730 02:49:14.896048 1658870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19348-1592571/.minikube/addons for local assets ...
	I0730 02:49:14.896109 1658870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19348-1592571/.minikube/files for local assets ...
	I0730 02:49:14.896199 1658870 filesync.go:149] local asset: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem -> 15979582.pem in /etc/ssl/certs
	I0730 02:49:14.896210 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem -> /etc/ssl/certs/15979582.pem
	I0730 02:49:14.896317 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 02:49:14.904895 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem --> /etc/ssl/certs/15979582.pem (1708 bytes)
	I0730 02:49:14.928583 1658870 start.go:296] duration metric: took 159.167926ms for postStartSetup
	I0730 02:49:14.928669 1658870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 02:49:14.928720 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:49:14.945579 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38943 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542/id_rsa Username:docker}
	I0730 02:49:15.037908 1658870 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0730 02:49:15.043576 1658870 fix.go:56] duration metric: took 5.205677284s for fixHost
	I0730 02:49:15.043604 1658870 start.go:83] releasing machines lock for "ha-642542", held for 5.205738501s
	I0730 02:49:15.043726 1658870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542
	I0730 02:49:15.067120 1658870 ssh_runner.go:195] Run: cat /version.json
	I0730 02:49:15.067178 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:49:15.067186 1658870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 02:49:15.067270 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:49:15.085866 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38943 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542/id_rsa Username:docker}
	I0730 02:49:15.098876 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38943 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542/id_rsa Username:docker}
	I0730 02:49:15.183864 1658870 ssh_runner.go:195] Run: systemctl --version
	I0730 02:49:15.314670 1658870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 02:49:15.455283 1658870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0730 02:49:15.459601 1658870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 02:49:15.468524 1658870 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0730 02:49:15.468603 1658870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 02:49:15.477236 1658870 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0730 02:49:15.477311 1658870 start.go:495] detecting cgroup driver to use...
	I0730 02:49:15.477349 1658870 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0730 02:49:15.477404 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 02:49:15.489397 1658870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 02:49:15.500608 1658870 docker.go:217] disabling cri-docker service (if available) ...
	I0730 02:49:15.500675 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 02:49:15.513823 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 02:49:15.525455 1658870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 02:49:15.613784 1658870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 02:49:15.704624 1658870 docker.go:233] disabling docker service ...
	I0730 02:49:15.704732 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 02:49:15.718317 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 02:49:15.730359 1658870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 02:49:15.809017 1658870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 02:49:15.894902 1658870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 02:49:15.906430 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 02:49:15.922377 1658870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 02:49:15.922495 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:15.932397 1658870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 02:49:15.932506 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:15.942536 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:15.952085 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:15.961776 1658870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 02:49:15.970742 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:15.980516 1658870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:15.990563 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:16.000858 1658870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 02:49:16.013211 1658870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 02:49:16.021690 1658870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:49:16.105403 1658870 ssh_runner.go:195] Run: sudo systemctl restart crio
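
The run of sed one-liners above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to "cgroupfs", moves conmon into the "pod" cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before restarting CRI-O. The two central edits, redone from Go instead of sed (a sketch, not minikube's code):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	conf, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(path, conf, 0o644); err != nil {
    		panic(err)
    	}
    }
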
	I0730 02:49:16.238850 1658870 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 02:49:16.238921 1658870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 02:49:16.242421 1658870 start.go:563] Will wait 60s for crictl version
	I0730 02:49:16.242487 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:49:16.245996 1658870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 02:49:16.284123 1658870 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
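
After restarting CRI-O, the start code waits up to 60s for the runtime socket and then up to 60s more for a working crictl before the version output above is obtained. A minimal sketch of that polling step (minikube's actual loop may differ):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the CRI-O socket exists or the deadline passes,
    // mirroring the "Will wait 60s for socket path" step in the log.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
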
	I0730 02:49:16.284215 1658870 ssh_runner.go:195] Run: crio --version
	I0730 02:49:16.326049 1658870 ssh_runner.go:195] Run: crio --version
	I0730 02:49:16.370492 1658870 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0730 02:49:16.372020 1658870 cli_runner.go:164] Run: docker network inspect ha-642542 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0730 02:49:16.390192 1658870 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0730 02:49:16.393698 1658870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 02:49:16.404116 1658870 kubeadm.go:883] updating cluster {Name:ha-642542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-642542 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0730 02:49:16.404270 1658870 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:49:16.404325 1658870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 02:49:16.456008 1658870 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 02:49:16.456035 1658870 crio.go:433] Images already preloaded, skipping extraction
	I0730 02:49:16.456096 1658870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0730 02:49:16.496241 1658870 crio.go:514] all images are preloaded for cri-o runtime.
	I0730 02:49:16.496264 1658870 cache_images.go:84] Images are preloaded, skipping loading
	I0730 02:49:16.496274 1658870 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0730 02:49:16.496429 1658870 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-642542 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-642542 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0730 02:49:16.496535 1658870 ssh_runner.go:195] Run: crio config
	I0730 02:49:16.568114 1658870 cni.go:84] Creating CNI manager for ""
	I0730 02:49:16.568133 1658870 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0730 02:49:16.568142 1658870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0730 02:49:16.568167 1658870 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-642542 NodeName:ha-642542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0730 02:49:16.568309 1658870 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-642542"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
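
The kubeadm config printed above is a single YAML stream with four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration; it is written out later as /var/tmp/minikube/kubeadm.yaml.new (2147 bytes). Decoding it document by document, sketched with gopkg.in/yaml.v3 (an assumed dependency here):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3" // assumed dependency for this sketch
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // end of the multi-document stream
    			}
    			panic(err)
    		}
    		fmt.Println(doc.APIVersion, doc.Kind) // four kinds, as listed above
    	}
    }
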
	
	I0730 02:49:16.568330 1658870 kube-vip.go:115] generating kube-vip config ...
	I0730 02:49:16.568392 1658870 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0730 02:49:16.581111 1658870 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
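
lb_enable is switched on only because the `lsmod | grep ip_vs` probe above succeeded: kube-vip's control-plane load balancing depends on the kernel's IPVS module. The same probe from Go, reading /proc/modules (lsmod's data source) directly; a sketch, assuming a Linux host:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/proc/modules") // what lsmod parses
    	if err != nil {
    		panic(err)
    	}
    	if strings.Contains(string(data), "ip_vs") {
    		fmt.Println("IPVS available: enable lb_enable in the kube-vip config")
    	} else {
    		fmt.Println("no ip_vs module: skip control-plane load balancing")
    	}
    }
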
	I0730 02:49:16.581219 1658870 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
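The manifest pins the cluster VIP 192.168.49.254 on port 8443 and configures ARP-based leader election (5s lease, 3s renew deadline, 1s retry), so exactly one control-plane node answers for the VIP at a time. A throwaway Go smoke test, assuming only the address and port shown above, to confirm the elected leader is answering:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// VIP and port taken from the kube-vip config above.
		conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 2*time.Second)
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("VIP is answering on 8443")
	}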
	I0730 02:49:16.581284 1658870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 02:49:16.590466 1658870 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 02:49:16.590541 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0730 02:49:16.600022 1658870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0730 02:49:16.617354 1658870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 02:49:16.635622 1658870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0730 02:49:16.653746 1658870 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
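The four `scp memory --> ...` lines copy assets that were rendered in memory (systemd drop-ins, kubeadm.yaml, the kube-vip manifest) straight onto the node over the existing SSH connection; nothing is staged on local disk. A rough sketch of that pattern using golang.org/x/crypto/ssh; the credentials and target path here are placeholders, not what minikube actually uses:

	package main

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// pushBytes writes data to a remote path by piping it into
	// `sudo tee` over an already-established SSH connection.
	func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", remotePath))
	}

	func main() {
		cfg := &ssh.ClientConfig{
			User:            "docker",                                  // node user seen in the sshutil lines
			Auth:            []ssh.AuthMethod{ssh.Password("example")}, // placeholder; the real flow uses the node's id_rsa key
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:38948", cfg) // port as in the log
		if err != nil {
			panic(err)
		}
		defer client.Close()
		if err := pushBytes(client, []byte("hello\n"), "/var/tmp/minikube/example.txt"); err != nil {
			panic(err)
		}
	}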
	I0730 02:49:16.672052 1658870 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0730 02:49:16.675477 1658870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
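That one-liner replaces the control-plane.minikube.internal entry in place: filter out any old line for the name, append the new one, write to a temp file, then sudo cp it over /etc/hosts. The same idea in Go, with the hypothetical helper setHostsEntry (run as root against a real /etc/hosts):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry mirrors the shell one-liner above: drop any line for
	// name, append "ip\tname", and replace the file via a temp copy.
	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var keep []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				keep = append(keep, line)
			}
		}
		keep = append(keep, fmt.Sprintf("%s\t%s", ip, name))
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		if err := setHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}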
	I0730 02:49:16.686088 1658870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:49:16.781203 1658870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 02:49:16.795217 1658870 certs.go:68] Setting up /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542 for IP: 192.168.49.2
	I0730 02:49:16.795291 1658870 certs.go:194] generating shared ca certs ...
	I0730 02:49:16.795313 1658870 certs.go:226] acquiring lock for ca certs: {Name:mkd188f515cf1f581cef2c6a3cc946da59d73d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:49:16.795472 1658870 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key
	I0730 02:49:16.795525 1658870 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key
	I0730 02:49:16.795538 1658870 certs.go:256] generating profile certs ...
	I0730 02:49:16.795619 1658870 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/client.key
	I0730 02:49:16.795652 1658870 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.key.79c226c3
	I0730 02:49:16.795672 1658870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.crt.79c226c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0730 02:49:17.269537 1658870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.crt.79c226c3 ...
	I0730 02:49:17.269570 1658870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.crt.79c226c3: {Name:mkdc5fceaa4c892a1b62d7dc18cfea82c51af652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:49:17.269773 1658870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.key.79c226c3 ...
	I0730 02:49:17.269788 1658870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.key.79c226c3: {Name:mk728bb2345c4946447f451336034642066d8f8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:49:17.269881 1658870 certs.go:381] copying /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.crt.79c226c3 -> /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.crt
	I0730 02:49:17.270021 1658870 certs.go:385] copying /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.key.79c226c3 -> /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.key
	I0730 02:49:17.270153 1658870 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/proxy-client.key
	I0730 02:49:17.270172 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 02:49:17.270187 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 02:49:17.270199 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 02:49:17.270214 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 02:49:17.270226 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 02:49:17.270241 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 02:49:17.270258 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 02:49:17.270277 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 02:49:17.270326 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958.pem (1338 bytes)
	W0730 02:49:17.270358 1658870 certs.go:480] ignoring /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958_empty.pem, impossibly tiny 0 bytes
	I0730 02:49:17.270373 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 02:49:17.270399 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem (1078 bytes)
	I0730 02:49:17.270425 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem (1123 bytes)
	I0730 02:49:17.270450 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem (1675 bytes)
	I0730 02:49:17.270497 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem (1708 bytes)
	I0730 02:49:17.270528 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958.pem -> /usr/share/ca-certificates/1597958.pem
	I0730 02:49:17.270546 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem -> /usr/share/ca-certificates/15979582.pem
	I0730 02:49:17.270558 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:49:17.271175 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 02:49:17.297689 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0730 02:49:17.321334 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 02:49:17.345610 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0730 02:49:17.368948 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0730 02:49:17.393210 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0730 02:49:17.417637 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 02:49:17.442095 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 02:49:17.466461 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958.pem --> /usr/share/ca-certificates/1597958.pem (1338 bytes)
	I0730 02:49:17.491279 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem --> /usr/share/ca-certificates/15979582.pem (1708 bytes)
	I0730 02:49:17.516094 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 02:49:17.541123 1658870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0730 02:49:17.558942 1658870 ssh_runner.go:195] Run: openssl version
	I0730 02:49:17.564941 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1597958.pem && ln -fs /usr/share/ca-certificates/1597958.pem /etc/ssl/certs/1597958.pem"
	I0730 02:49:17.574763 1658870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1597958.pem
	I0730 02:49:17.578151 1658870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 02:37 /usr/share/ca-certificates/1597958.pem
	I0730 02:49:17.578255 1658870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1597958.pem
	I0730 02:49:17.586487 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1597958.pem /etc/ssl/certs/51391683.0"
	I0730 02:49:17.595268 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15979582.pem && ln -fs /usr/share/ca-certificates/15979582.pem /etc/ssl/certs/15979582.pem"
	I0730 02:49:17.604831 1658870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15979582.pem
	I0730 02:49:17.608304 1658870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 02:37 /usr/share/ca-certificates/15979582.pem
	I0730 02:49:17.608408 1658870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15979582.pem
	I0730 02:49:17.615160 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15979582.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 02:49:17.623760 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 02:49:17.633091 1658870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:49:17.636781 1658870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:49:17.636846 1658870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:49:17.643369 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
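Each CA install above is the same three-step idiom: verify the file, compute its OpenSSL subject hash (b5213941 for minikubeCA), and link it as /etc/ssl/certs/<hash>.0 so OpenSSL's hashed-directory lookup can find it. A small Go sketch of those steps, shelling out to openssl for the hash; installCA and the paths are illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links certPath into dir under its OpenSSL subject hash,
	// mirroring the `openssl x509 -hash` + `ln -fs` commands in the log.
	func installCA(certPath, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(dir, hash+".0")
		os.Remove(link) // -f behaviour: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}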
	I0730 02:49:17.652360 1658870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 02:49:17.655897 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0730 02:49:17.662562 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0730 02:49:17.669302 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0730 02:49:17.675902 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0730 02:49:17.682495 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0730 02:49:17.689369 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
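The six `-checkend 86400` runs ask openssl to exit non-zero if a certificate expires within 24 hours; only when they all pass does the restart path reuse the existing control-plane certs. The same check in pure Go, for any single PEM certificate (the path is just an example):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM
	// file expires within d (-checkend 86400 corresponds to d = 24h).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}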
	I0730 02:49:17.696117 1658870 kubeadm.go:392] StartCluster: {Name:ha-642542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-642542 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:49:17.696259 1658870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0730 02:49:17.696366 1658870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0730 02:49:17.734143 1658870 cri.go:89] found id: ""
	I0730 02:49:17.734215 1658870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0730 02:49:17.742913 1658870 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0730 02:49:17.742937 1658870 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0730 02:49:17.742989 1658870 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0730 02:49:17.751115 1658870 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0730 02:49:17.751566 1658870 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-642542" does not appear in /home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:49:17.751667 1658870 kubeconfig.go:62] /home/jenkins/minikube-integration/19348-1592571/kubeconfig needs updating (will repair): [kubeconfig missing "ha-642542" cluster setting kubeconfig missing "ha-642542" context setting]
	I0730 02:49:17.751922 1658870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/kubeconfig: {Name:mk572b463a11a946de92ccc491c42330cd76de64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:49:17.752337 1658870 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:49:17.752576 1658870 kapi.go:59] client config for ha-642542: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/client.crt", KeyFile:"/home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/client.key", CAFile:"/home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17a5cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0730 02:49:17.753189 1658870 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0730 02:49:17.753263 1658870 cert_rotation.go:137] Starting client certificate rotation controller
	I0730 02:49:17.761539 1658870 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0730 02:49:17.761615 1658870 kubeadm.go:597] duration metric: took 18.67124ms to restartPrimaryControlPlane
	I0730 02:49:17.761633 1658870 kubeadm.go:394] duration metric: took 65.524455ms to StartCluster
	I0730 02:49:17.761656 1658870 settings.go:142] acquiring lock: {Name:mk63e25bcb01770839277a929f9ba49ce5be4445 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:49:17.761725 1658870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:49:17.762367 1658870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/kubeconfig: {Name:mk572b463a11a946de92ccc491c42330cd76de64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:49:17.762567 1658870 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 02:49:17.762594 1658870 start.go:241] waiting for startup goroutines ...
	I0730 02:49:17.762602 1658870 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0730 02:49:17.763028 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:49:17.765676 1658870 out.go:177] * Enabled addons: 
	I0730 02:49:17.767942 1658870 addons.go:510] duration metric: took 5.335222ms for enable addons: enabled=[]
	I0730 02:49:17.768162 1658870 start.go:246] waiting for cluster config update ...
	I0730 02:49:17.768179 1658870 start.go:255] writing updated cluster config ...
	I0730 02:49:17.770264 1658870 out.go:177] 
	I0730 02:49:17.772258 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:49:17.772367 1658870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/config.json ...
	I0730 02:49:17.774582 1658870 out.go:177] * Starting "ha-642542-m02" control-plane node in "ha-642542" cluster
	I0730 02:49:17.776265 1658870 cache.go:121] Beginning downloading kic base image for docker with crio
	I0730 02:49:17.778086 1658870 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0730 02:49:17.780168 1658870 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:49:17.780203 1658870 cache.go:56] Caching tarball of preloaded images
	I0730 02:49:17.780256 1658870 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0730 02:49:17.780312 1658870 preload.go:172] Found /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0730 02:49:17.780326 1658870 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 02:49:17.780467 1658870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/config.json ...
	W0730 02:49:17.797174 1658870 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0730 02:49:17.797194 1658870 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0730 02:49:17.797280 1658870 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0730 02:49:17.797302 1658870 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0730 02:49:17.797310 1658870 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0730 02:49:17.797318 1658870 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0730 02:49:17.797324 1658870 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0730 02:49:17.798431 1658870 image.go:273] response: 
	I0730 02:49:17.913800 1658870 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0730 02:49:17.913836 1658870 cache.go:194] Successfully downloaded all kic artifacts
	I0730 02:49:17.913867 1658870 start.go:360] acquireMachinesLock for ha-642542-m02: {Name:mkd136a43ceb144299db9ccc18cb543f90a4b008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 02:49:17.913930 1658870 start.go:364] duration metric: took 41.541µs to acquireMachinesLock for "ha-642542-m02"
	I0730 02:49:17.913958 1658870 start.go:96] Skipping create...Using existing machine configuration
	I0730 02:49:17.913968 1658870 fix.go:54] fixHost starting: m02
	I0730 02:49:17.914301 1658870 cli_runner.go:164] Run: docker container inspect ha-642542-m02 --format={{.State.Status}}
	I0730 02:49:17.930661 1658870 fix.go:112] recreateIfNeeded on ha-642542-m02: state=Stopped err=<nil>
	W0730 02:49:17.930697 1658870 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 02:49:17.932856 1658870 out.go:177] * Restarting existing docker container for "ha-642542-m02" ...
	I0730 02:49:17.934842 1658870 cli_runner.go:164] Run: docker start ha-642542-m02
	I0730 02:49:18.235061 1658870 cli_runner.go:164] Run: docker container inspect ha-642542-m02 --format={{.State.Status}}
	I0730 02:49:18.258298 1658870 kic.go:430] container "ha-642542-m02" state is running.
	I0730 02:49:18.259048 1658870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542-m02
	I0730 02:49:18.279416 1658870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/config.json ...
	I0730 02:49:18.279719 1658870 machine.go:94] provisionDockerMachine start ...
	I0730 02:49:18.279804 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m02
	I0730 02:49:18.300022 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:49:18.300256 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38948 <nil> <nil>}
	I0730 02:49:18.300269 1658870 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 02:49:18.300929 1658870 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55210->127.0.0.1:38948: read: connection reset by peer
	I0730 02:49:21.488826 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-642542-m02
	
	I0730 02:49:21.488852 1658870 ubuntu.go:169] provisioning hostname "ha-642542-m02"
	I0730 02:49:21.488985 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m02
	I0730 02:49:21.520935 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:49:21.521201 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38948 <nil> <nil>}
	I0730 02:49:21.521212 1658870 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-642542-m02 && echo "ha-642542-m02" | sudo tee /etc/hostname
	I0730 02:49:21.722400 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-642542-m02
	
	I0730 02:49:21.722538 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m02
	I0730 02:49:21.747735 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:49:21.748002 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38948 <nil> <nil>}
	I0730 02:49:21.748019 1658870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-642542-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-642542-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-642542-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 02:49:21.932505 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 02:49:21.932531 1658870 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19348-1592571/.minikube CaCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19348-1592571/.minikube}
	I0730 02:49:21.932547 1658870 ubuntu.go:177] setting up certificates
	I0730 02:49:21.932568 1658870 provision.go:84] configureAuth start
	I0730 02:49:21.932631 1658870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542-m02
	I0730 02:49:21.961491 1658870 provision.go:143] copyHostCerts
	I0730 02:49:21.961533 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem
	I0730 02:49:21.961568 1658870 exec_runner.go:144] found /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem, removing ...
	I0730 02:49:21.961575 1658870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem
	I0730 02:49:21.961657 1658870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem (1123 bytes)
	I0730 02:49:21.961746 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem
	I0730 02:49:21.961762 1658870 exec_runner.go:144] found /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem, removing ...
	I0730 02:49:21.961768 1658870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem
	I0730 02:49:21.961794 1658870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem (1675 bytes)
	I0730 02:49:21.961840 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem
	I0730 02:49:21.961856 1658870 exec_runner.go:144] found /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem, removing ...
	I0730 02:49:21.961860 1658870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem
	I0730 02:49:21.961882 1658870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem (1078 bytes)
	I0730 02:49:21.961941 1658870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem org=jenkins.ha-642542-m02 san=[127.0.0.1 192.168.49.3 ha-642542-m02 localhost minikube]
	I0730 02:49:23.110869 1658870 provision.go:177] copyRemoteCerts
	I0730 02:49:23.110947 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 02:49:23.110998 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m02
	I0730 02:49:23.128446 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38948 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m02/id_rsa Username:docker}
	I0730 02:49:23.230129 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 02:49:23.230204 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0730 02:49:23.260414 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 02:49:23.260481 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 02:49:23.293882 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 02:49:23.293950 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0730 02:49:23.323523 1658870 provision.go:87] duration metric: took 1.390931489s to configureAuth
	I0730 02:49:23.323552 1658870 ubuntu.go:193] setting minikube options for container-runtime
	I0730 02:49:23.323825 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:49:23.323952 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m02
	I0730 02:49:23.353674 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:49:23.353910 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38948 <nil> <nil>}
	I0730 02:49:23.353926 1658870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 02:49:23.931443 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 02:49:23.931472 1658870 machine.go:97] duration metric: took 5.651742733s to provisionDockerMachine
	I0730 02:49:23.931483 1658870 start.go:293] postStartSetup for "ha-642542-m02" (driver="docker")
	I0730 02:49:23.931496 1658870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 02:49:23.931592 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 02:49:23.931636 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m02
	I0730 02:49:23.958540 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38948 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m02/id_rsa Username:docker}
	I0730 02:49:24.071245 1658870 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 02:49:24.075664 1658870 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0730 02:49:24.075699 1658870 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0730 02:49:24.075710 1658870 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0730 02:49:24.075717 1658870 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0730 02:49:24.075728 1658870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19348-1592571/.minikube/addons for local assets ...
	I0730 02:49:24.075791 1658870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19348-1592571/.minikube/files for local assets ...
	I0730 02:49:24.075868 1658870 filesync.go:149] local asset: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem -> 15979582.pem in /etc/ssl/certs
	I0730 02:49:24.075875 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem -> /etc/ssl/certs/15979582.pem
	I0730 02:49:24.076029 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 02:49:24.095063 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem --> /etc/ssl/certs/15979582.pem (1708 bytes)
	I0730 02:49:24.139148 1658870 start.go:296] duration metric: took 207.650892ms for postStartSetup
	I0730 02:49:24.139290 1658870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 02:49:24.139351 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m02
	I0730 02:49:24.165491 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38948 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m02/id_rsa Username:docker}
	I0730 02:49:24.364780 1658870 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
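The two df probes report percent-used and gigabytes-free for /var. The same figures can come straight from the kernel; a Linux-only Go sketch using syscall.Statfs (the used-percent here is an approximation that ignores reserved blocks):

	package main

	import (
		"fmt"
		"syscall"
	)

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/var", &st); err != nil {
			panic(err)
		}
		total := st.Blocks * uint64(st.Bsize) // filesystem size in bytes
		avail := st.Bavail * uint64(st.Bsize) // bytes available to unprivileged users
		fmt.Printf("/var: %d GiB free, %.0f%% used\n",
			avail>>30, 100*float64(total-avail)/float64(total))
	}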
	I0730 02:49:24.388586 1658870 fix.go:56] duration metric: took 6.47460966s for fixHost
	I0730 02:49:24.388609 1658870 start.go:83] releasing machines lock for "ha-642542-m02", held for 6.474664593s
	I0730 02:49:24.388680 1658870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542-m02
	I0730 02:49:24.425752 1658870 out.go:177] * Found network options:
	I0730 02:49:24.427931 1658870 out.go:177]   - NO_PROXY=192.168.49.2
	W0730 02:49:24.430160 1658870 proxy.go:119] fail to check proxy env: Error ip not in block
	W0730 02:49:24.430205 1658870 proxy.go:119] fail to check proxy env: Error ip not in block
	I0730 02:49:24.430285 1658870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 02:49:24.430333 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m02
	I0730 02:49:24.430568 1658870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 02:49:24.430632 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m02
	I0730 02:49:24.458011 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38948 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m02/id_rsa Username:docker}
	I0730 02:49:24.473581 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38948 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m02/id_rsa Username:docker}
	I0730 02:49:25.112660 1658870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0730 02:49:25.136132 1658870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 02:49:25.162804 1658870 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0730 02:49:25.162898 1658870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 02:49:25.196510 1658870 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0730 02:49:25.196535 1658870 start.go:495] detecting cgroup driver to use...
	I0730 02:49:25.196577 1658870 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0730 02:49:25.196640 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 02:49:25.237367 1658870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 02:49:25.286075 1658870 docker.go:217] disabling cri-docker service (if available) ...
	I0730 02:49:25.286141 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 02:49:25.324001 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 02:49:25.368491 1658870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 02:49:25.694591 1658870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 02:49:26.010646 1658870 docker.go:233] disabling docker service ...
	I0730 02:49:26.010816 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 02:49:26.071502 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 02:49:26.153838 1658870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 02:49:26.439920 1658870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 02:49:26.737101 1658870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 02:49:26.753422 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 02:49:26.817694 1658870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 02:49:26.817783 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:26.838993 1658870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 02:49:26.839075 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:26.855990 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:26.873682 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:26.890298 1658870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 02:49:26.918420 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:26.934862 1658870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:49:26.966667 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
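This run of `sed -i` edits rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to cgroupfs, move conmon into the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A compact Go rendering of the first two rewrites, against a made-up config fragment (Go's (?m) multi-line regexp flag stands in for sed's line anchors):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Illustrative 02-crio.conf content; the real file differs.
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.8"
	[crio.runtime]
	cgroup_manager = "systemd"
	`
		// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		re = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}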
	I0730 02:49:26.981697 1658870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 02:49:27.001925 1658870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 02:49:27.020086 1658870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:49:27.307582 1658870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0730 02:49:28.708644 1658870 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.401011224s)
	I0730 02:49:28.708684 1658870 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 02:49:28.708768 1658870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 02:49:28.716349 1658870 start.go:563] Will wait 60s for crictl version
	I0730 02:49:28.716447 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:49:28.724543 1658870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 02:49:28.816092 1658870 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0730 02:49:28.816231 1658870 ssh_runner.go:195] Run: crio --version
	I0730 02:49:28.876656 1658870 ssh_runner.go:195] Run: crio --version
	I0730 02:49:28.958180 1658870 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0730 02:49:28.960153 1658870 out.go:177]   - env NO_PROXY=192.168.49.2
	I0730 02:49:28.962451 1658870 cli_runner.go:164] Run: docker network inspect ha-642542 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0730 02:49:28.993723 1658870 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0730 02:49:28.997482 1658870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 02:49:29.011839 1658870 mustload.go:65] Loading cluster: ha-642542
	I0730 02:49:29.012104 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:49:29.012422 1658870 cli_runner.go:164] Run: docker container inspect ha-642542 --format={{.State.Status}}
	I0730 02:49:29.037978 1658870 host.go:66] Checking if "ha-642542" exists ...
	I0730 02:49:29.038263 1658870 certs.go:68] Setting up /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542 for IP: 192.168.49.3
	I0730 02:49:29.038278 1658870 certs.go:194] generating shared ca certs ...
	I0730 02:49:29.038293 1658870 certs.go:226] acquiring lock for ca certs: {Name:mkd188f515cf1f581cef2c6a3cc946da59d73d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:49:29.038444 1658870 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key
	I0730 02:49:29.038491 1658870 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key
	I0730 02:49:29.038502 1658870 certs.go:256] generating profile certs ...
	I0730 02:49:29.038577 1658870 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/client.key
	I0730 02:49:29.038650 1658870 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.key.75c5dd53
	I0730 02:49:29.038690 1658870 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/proxy-client.key
	I0730 02:49:29.038704 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 02:49:29.038718 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 02:49:29.038733 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 02:49:29.038745 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 02:49:29.038760 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0730 02:49:29.038772 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0730 02:49:29.038788 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0730 02:49:29.038806 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0730 02:49:29.038856 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958.pem (1338 bytes)
	W0730 02:49:29.038901 1658870 certs.go:480] ignoring /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958_empty.pem, impossibly tiny 0 bytes
	I0730 02:49:29.038915 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 02:49:29.038941 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem (1078 bytes)
	I0730 02:49:29.038968 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem (1123 bytes)
	I0730 02:49:29.038994 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem (1675 bytes)
	I0730 02:49:29.039040 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem (1708 bytes)
	I0730 02:49:29.039074 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:49:29.039091 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958.pem -> /usr/share/ca-certificates/1597958.pem
	I0730 02:49:29.039104 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem -> /usr/share/ca-certificates/15979582.pem
	I0730 02:49:29.039165 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:49:29.065694 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38943 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542/id_rsa Username:docker}
	I0730 02:49:29.156328 1658870 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0730 02:49:29.166510 1658870 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0730 02:49:29.179083 1658870 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0730 02:49:29.183445 1658870 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0730 02:49:29.195817 1658870 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0730 02:49:29.199672 1658870 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0730 02:49:29.211934 1658870 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0730 02:49:29.215593 1658870 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0730 02:49:29.227785 1658870 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0730 02:49:29.231435 1658870 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0730 02:49:29.244099 1658870 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0730 02:49:29.259716 1658870 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0730 02:49:29.293782 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 02:49:29.343326 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0730 02:49:29.404319 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 02:49:29.465446 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0730 02:49:29.500058 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0730 02:49:29.528872 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0730 02:49:29.556016 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0730 02:49:29.599083 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0730 02:49:29.641324 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 02:49:29.682372 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958.pem --> /usr/share/ca-certificates/1597958.pem (1338 bytes)
	I0730 02:49:29.734704 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem --> /usr/share/ca-certificates/15979582.pem (1708 bytes)
	I0730 02:49:29.762360 1658870 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0730 02:49:29.784742 1658870 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0730 02:49:29.810847 1658870 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0730 02:49:29.837638 1658870 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0730 02:49:29.858030 1658870 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0730 02:49:29.885399 1658870 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0730 02:49:29.909226 1658870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0730 02:49:29.943524 1658870 ssh_runner.go:195] Run: openssl version
	I0730 02:49:29.949223 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 02:49:29.961606 1658870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:49:29.965451 1658870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:49:29.965563 1658870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:49:29.976598 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 02:49:29.990765 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1597958.pem && ln -fs /usr/share/ca-certificates/1597958.pem /etc/ssl/certs/1597958.pem"
	I0730 02:49:30.005876 1658870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1597958.pem
	I0730 02:49:30.010677 1658870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 02:37 /usr/share/ca-certificates/1597958.pem
	I0730 02:49:30.010786 1658870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1597958.pem
	I0730 02:49:30.023631 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1597958.pem /etc/ssl/certs/51391683.0"
	I0730 02:49:30.037761 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15979582.pem && ln -fs /usr/share/ca-certificates/15979582.pem /etc/ssl/certs/15979582.pem"
	I0730 02:49:30.057194 1658870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15979582.pem
	I0730 02:49:30.062094 1658870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 02:37 /usr/share/ca-certificates/15979582.pem
	I0730 02:49:30.062199 1658870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15979582.pem
	I0730 02:49:30.070923 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15979582.pem /etc/ssl/certs/3ec20f2e.0"
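The three blocks above follow OpenSSL's CA-trust convention: each PEM under /usr/share/ca-certificates is hashed with openssl x509 -hash -noout, then symlinked as <hash>.0 under /etc/ssl/certs, which is how OpenSSL resolves trusted CAs at verification time. A minimal sketch of that step (paths and shelling out to the openssl binary are illustrative; minikube drives the equivalent commands over ssh_runner):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors: openssl x509 -hash -noout -in <pem>
// followed by: ln -fs <pem> <certsDir>/<hash>.0
func linkCertByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // force-replace an existing link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```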
	I0730 02:49:30.082539 1658870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 02:49:30.087254 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0730 02:49:30.095636 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0730 02:49:30.104743 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0730 02:49:30.113443 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0730 02:49:30.122265 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0730 02:49:30.130334 1658870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
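Each of the six openssl runs above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours (86400 seconds); a non-zero exit is the signal to regenerate. A minimal sketch of the same check (the cert list mirrors the log; treating any failure as a regeneration signal is an assumption about the caller's policy):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// Exits non-zero if the cert expires within 86400s (24h).
		err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
		if err != nil {
			fmt.Printf("%s: expires within 24h or is unreadable: %v\n", c, err)
		}
	}
}
```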
	I0730 02:49:30.139299 1658870 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.30.3 crio true true} ...
	I0730 02:49:30.139459 1658870 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-642542-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-642542 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
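A detail worth noting in the kubelet unit above: the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart list inherited from kubelet.service before the drop-in sets its own command. A sketch of materializing such a drop-in (the path matches the scp target written a few lines below; the flag list is abbreviated for illustration):

```go
package main

import "os"

func main() {
	// An empty "ExecStart=" clears the inherited command; the second
	// assignment then sets the override.
	dropIn := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		panic(err)
	}
	// systemctl daemon-reload is still required before the change takes effect.
}
```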
	I0730 02:49:30.139511 1658870 kube-vip.go:115] generating kube-vip config ...
	I0730 02:49:30.139588 1658870 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0730 02:49:30.156125 1658870 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0730 02:49:30.156250 1658870 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
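Because the lsmod check for ip_vs succeeded, the generated manifest carries both cp_enable and lb_enable set to "true": kube-vip holds the VIP 192.168.49.254 via ARP leader election and additionally load-balances API traffic across control planes on port 8443. A sketch (not minikube code) that verifies those two env vars in the manifest; gopkg.in/yaml.v3 and reading the manifest from its static-pod path are assumptions for illustration:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type envVar struct {
	Name  string `yaml:"name"`
	Value string `yaml:"value"`
}

type pod struct {
	Spec struct {
		Containers []struct {
			Env []envVar `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var p pod
	if err := yaml.Unmarshal(raw, &p); err != nil {
		panic(err)
	}
	want := map[string]bool{"cp_enable": false, "lb_enable": false}
	for _, c := range p.Spec.Containers {
		for _, e := range c.Env {
			if _, ok := want[e.Name]; ok && e.Value == "true" {
				want[e.Name] = true
			}
		}
	}
	fmt.Println("control-plane load-balancing enabled:", want["cp_enable"] && want["lb_enable"])
}
```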
	I0730 02:49:30.156355 1658870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 02:49:30.166980 1658870 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 02:49:30.167088 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0730 02:49:30.176681 1658870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0730 02:49:30.197885 1658870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 02:49:30.216879 1658870 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0730 02:49:30.236105 1658870 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0730 02:49:30.239769 1658870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
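The one-liner above is an idempotent hosts-file update: grep -v strips any stale control-plane.minikube.internal entry, the echo appends the VIP mapping, and the result is staged under /tmp before sudo cp replaces /etc/hosts. A sketch of the same logic in Go (permission to write /etc/hosts is assumed; the real step shells out with sudo instead):

```go
package main

import (
	"os"
	"strings"
)

func pinHost(hostsPath, ip, name string) error {
	raw, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) { // matches grep -v $'\t<name>$'
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name) // append the VIP mapping
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath) // replace in one step, like the cp from /tmp
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
```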
	I0730 02:49:30.250504 1658870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:49:30.372799 1658870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 02:49:30.385512 1658870 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0730 02:49:30.385797 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:49:30.391584 1658870 out.go:177] * Verifying Kubernetes components...
	I0730 02:49:30.393584 1658870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:49:30.507154 1658870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 02:49:30.520413 1658870 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:49:30.520697 1658870 kapi.go:59] client config for ha-642542: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/client.crt", KeyFile:"/home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/client.key", CAFile:"/home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17a5cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0730 02:49:30.520761 1658870 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0730 02:49:30.520989 1658870 node_ready.go:35] waiting up to 6m0s for node "ha-642542-m02" to be "Ready" ...
	I0730 02:49:30.521070 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:30.521082 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:30.521090 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:30.521095 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:41.700665 1658870 round_trippers.go:574] Response Status: 500 Internal Server Error in 11179 milliseconds
	I0730 02:49:41.702608 1658870 node_ready.go:53] error getting node "ha-642542-m02": etcdserver: request timed out
	I0730 02:49:41.702688 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:41.702694 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:41.702706 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:41.702711 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.481364 1658870 round_trippers.go:574] Response Status: 500 Internal Server Error in 7778 milliseconds
	I0730 02:49:49.481642 1658870 node_ready.go:53] error getting node "ha-642542-m02": etcdserver: leader changed
	I0730 02:49:49.481701 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:49.481707 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.481714 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.481720 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.503007 1658870 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0730 02:49:49.504368 1658870 node_ready.go:49] node "ha-642542-m02" has status "Ready":"True"
	I0730 02:49:49.504390 1658870 node_ready.go:38] duration metric: took 18.983383986s for node "ha-642542-m02" to be "Ready" ...
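The two 500s above (etcdserver: request timed out, then leader changed) are transient errors while the joining control plane disturbs the etcd quorum; the wait loop simply retries the node GET until the Ready condition reports True. A minimal sketch of that poll (cert and CA paths mirror the client config dumped above; the 2s retry interval is an assumption):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/client.crt",
		"/home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert}, RootCAs: pool,
	}}}

	var node struct {
		Status struct {
			Conditions []struct{ Type, Status string } `json:"conditions"`
		} `json:"status"`
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02")
		if err == nil && resp.StatusCode == http.StatusOK {
			json.NewDecoder(resp.Body).Decode(&node)
			resp.Body.Close()
			for _, c := range node.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					fmt.Println("node Ready")
					return
				}
			}
		} else if resp != nil {
			resp.Body.Close() // transient 500s such as "etcdserver: leader changed"
		}
		time.Sleep(2 * time.Second)
	}
}
```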
	I0730 02:49:49.504401 1658870 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 02:49:49.504513 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0730 02:49:49.504519 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.504527 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.504530 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.530736 1658870 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0730 02:49:49.558803 1658870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:49.561687 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:49:49.561743 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.561766 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.561788 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.573769 1658870 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0730 02:49:49.574975 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:49.575033 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.575055 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.575076 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.577732 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:49.578849 1658870 pod_ready.go:92] pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:49.578898 1658870 pod_ready.go:81] duration metric: took 17.358815ms for pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:49.578932 1658870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7vr5f" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:49.579027 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7vr5f
	I0730 02:49:49.579061 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.579083 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.579104 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.581727 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:49.582992 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:49.583039 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.583078 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.583103 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.585708 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:49.586641 1658870 pod_ready.go:92] pod "coredns-7db6d8ff4d-7vr5f" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:49.586702 1658870 pod_ready.go:81] duration metric: took 7.746658ms for pod "coredns-7db6d8ff4d-7vr5f" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:49.586745 1658870 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:49.586846 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-642542
	I0730 02:49:49.586872 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.586912 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.586933 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.589334 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:49.590407 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:49.590450 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.590486 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.590508 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.593007 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:49.593627 1658870 pod_ready.go:92] pod "etcd-ha-642542" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:49.593676 1658870 pod_ready.go:81] duration metric: took 6.905637ms for pod "etcd-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:49.593701 1658870 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:49.593790 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-642542-m02
	I0730 02:49:49.593824 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.593845 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.593866 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.596321 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:49.597419 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:49.597457 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.597476 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.597511 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.599977 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:49.600558 1658870 pod_ready.go:92] pod "etcd-ha-642542-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:49.600614 1658870 pod_ready.go:81] duration metric: took 6.892566ms for pod "etcd-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:49.600640 1658870 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:49.681928 1658870 request.go:629] Waited for 81.192666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-642542-m03
	I0730 02:49:49.682045 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-642542-m03
	I0730 02:49:49.682093 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.682122 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.682183 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.686421 1658870 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 02:49:49.882418 1658870 request.go:629] Waited for 195.279622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m03
	I0730 02:49:49.882529 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m03
	I0730 02:49:49.882590 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:49.882616 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:49.882637 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:49.889393 1658870 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0730 02:49:49.889686 1658870 pod_ready.go:97] node "ha-642542-m03" hosting pod "etcd-ha-642542-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-642542-m03": nodes "ha-642542-m03" not found
	I0730 02:49:49.889730 1658870 pod_ready.go:81] duration metric: took 289.06916ms for pod "etcd-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	E0730 02:49:49.889769 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542-m03" hosting pod "etcd-ha-642542-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-642542-m03": nodes "ha-642542-m03" not found
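The recurring "Waited for ...ms due to client-side throttling" lines are the client's own rate limiter at work, not server-side priority and fairness: the ~190-200ms spacing is consistent with client-go's default QPS of 5 once the initial burst is spent. A sketch of that pacing, using golang.org/x/time/rate as a stand-in for client-go's internal flowcontrol limiter:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 1) // 5 req/s => ~200ms between requests
	ctx := context.Background()
	last := time.Now()
	for i := 0; i < 5; i++ {
		if err := limiter.Wait(ctx); err != nil { // blocks until a token is available
			panic(err)
		}
		now := time.Now()
		fmt.Printf("request %d after %v\n", i, now.Sub(last)) // ~200ms apart after the first
		last = now
	}
}
```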
	I0730 02:49:49.889807 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:50.081947 1658870 request.go:629] Waited for 192.043706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542
	I0730 02:49:50.082062 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542
	I0730 02:49:50.082099 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:50.082125 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:50.082145 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:50.091312 1658870 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0730 02:49:50.282054 1658870 request.go:629] Waited for 189.443096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:50.282138 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:50.282149 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:50.282156 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:50.282164 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:50.288481 1658870 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 02:49:50.289428 1658870 pod_ready.go:92] pod "kube-apiserver-ha-642542" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:50.289450 1658870 pod_ready.go:81] duration metric: took 399.604941ms for pod "kube-apiserver-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:50.289465 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:50.481797 1658870 request.go:629] Waited for 192.264238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542-m02
	I0730 02:49:50.481876 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542-m02
	I0730 02:49:50.481887 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:50.481906 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:50.481916 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:50.526535 1658870 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I0730 02:49:50.682537 1658870 request.go:629] Waited for 143.266718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:50.682624 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:50.682652 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:50.682667 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:50.682674 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:50.685871 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:49:50.686883 1658870 pod_ready.go:92] pod "kube-apiserver-ha-642542-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:50.686906 1658870 pod_ready.go:81] duration metric: took 397.427566ms for pod "kube-apiserver-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:50.686951 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:50.882220 1658870 request.go:629] Waited for 195.18616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542-m03
	I0730 02:49:50.882442 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542-m03
	I0730 02:49:50.882469 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:50.882499 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:50.882528 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:50.887853 1658870 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 02:49:51.081818 1658870 request.go:629] Waited for 192.166181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m03
	I0730 02:49:51.081977 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m03
	I0730 02:49:51.082013 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:51.082038 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:51.082057 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:51.084968 1658870 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0730 02:49:51.085348 1658870 pod_ready.go:97] node "ha-642542-m03" hosting pod "kube-apiserver-ha-642542-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-642542-m03": nodes "ha-642542-m03" not found
	I0730 02:49:51.085404 1658870 pod_ready.go:81] duration metric: took 398.437233ms for pod "kube-apiserver-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	E0730 02:49:51.085429 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542-m03" hosting pod "kube-apiserver-ha-642542-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-642542-m03": nodes "ha-642542-m03" not found
	I0730 02:49:51.085469 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:51.282002 1658870 request.go:629] Waited for 196.423989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542
	I0730 02:49:51.282066 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542
	I0730 02:49:51.282079 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:51.282097 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:51.282108 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:51.285194 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:49:51.482456 1658870 request.go:629] Waited for 196.347815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:51.482513 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:51.482519 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:51.482528 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:51.482535 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:51.485118 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:51.485847 1658870 pod_ready.go:92] pod "kube-controller-manager-ha-642542" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:51.485869 1658870 pod_ready.go:81] duration metric: took 400.372194ms for pod "kube-controller-manager-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:51.485881 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:51.681751 1658870 request.go:629] Waited for 195.795557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542-m02
	I0730 02:49:51.681845 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542-m02
	I0730 02:49:51.681857 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:51.681867 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:51.681872 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:51.684675 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:51.881771 1658870 request.go:629] Waited for 196.27589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:51.881854 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:51.881865 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:51.881875 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:51.881882 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:51.885149 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:49:51.885661 1658870 pod_ready.go:92] pod "kube-controller-manager-ha-642542-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:51.885679 1658870 pod_ready.go:81] duration metric: took 399.790653ms for pod "kube-controller-manager-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:51.885692 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:52.082048 1658870 request.go:629] Waited for 196.291373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542-m03
	I0730 02:49:52.082120 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542-m03
	I0730 02:49:52.082132 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:52.082142 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:52.082152 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:52.091502 1658870 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0730 02:49:52.281803 1658870 request.go:629] Waited for 189.196621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m03
	I0730 02:49:52.281874 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m03
	I0730 02:49:52.281883 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:52.281893 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:52.281900 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:52.284418 1658870 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0730 02:49:52.284881 1658870 pod_ready.go:97] node "ha-642542-m03" hosting pod "kube-controller-manager-ha-642542-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-642542-m03": nodes "ha-642542-m03" not found
	I0730 02:49:52.284910 1658870 pod_ready.go:81] duration metric: took 399.210385ms for pod "kube-controller-manager-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	E0730 02:49:52.284921 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542-m03" hosting pod "kube-controller-manager-ha-642542-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-642542-m03": nodes "ha-642542-m03" not found
	I0730 02:49:52.284929 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-72lmf" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:52.482186 1658870 request.go:629] Waited for 197.18763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72lmf
	I0730 02:49:52.482328 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72lmf
	I0730 02:49:52.482375 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:52.482408 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:52.482441 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:52.486161 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:49:52.682788 1658870 request.go:629] Waited for 195.288862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:52.682933 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:52.682942 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:52.682951 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:52.682955 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:52.687130 1658870 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 02:49:52.690977 1658870 pod_ready.go:92] pod "kube-proxy-72lmf" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:52.691071 1658870 pod_ready.go:81] duration metric: took 406.119928ms for pod "kube-proxy-72lmf" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:52.691149 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7rrfn" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:52.882528 1658870 request.go:629] Waited for 191.255903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrfn
	I0730 02:49:52.882638 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrfn
	I0730 02:49:52.882672 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:52.882699 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:52.882719 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:52.888579 1658870 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 02:49:53.082444 1658870 request.go:629] Waited for 193.142757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m04
	I0730 02:49:53.082555 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m04
	I0730 02:49:53.082622 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:53.082650 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:53.082667 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:53.085329 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:53.086457 1658870 pod_ready.go:92] pod "kube-proxy-7rrfn" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:53.086508 1658870 pod_ready.go:81] duration metric: took 395.330258ms for pod "kube-proxy-7rrfn" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:53.086542 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7txb9" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:53.282431 1658870 request.go:629] Waited for 195.795501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7txb9
	I0730 02:49:53.282547 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7txb9
	I0730 02:49:53.282584 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:53.282613 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:53.282635 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:53.285479 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:53.482361 1658870 request.go:629] Waited for 196.162984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m03
	I0730 02:49:53.482509 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m03
	I0730 02:49:53.482544 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:53.482577 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:53.482597 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:53.485135 1658870 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0730 02:49:53.485453 1658870 pod_ready.go:97] node "ha-642542-m03" hosting pod "kube-proxy-7txb9" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-642542-m03": nodes "ha-642542-m03" not found
	I0730 02:49:53.485493 1658870 pod_ready.go:81] duration metric: took 398.931435ms for pod "kube-proxy-7txb9" in "kube-system" namespace to be "Ready" ...
	E0730 02:49:53.485517 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542-m03" hosting pod "kube-proxy-7txb9" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-642542-m03": nodes "ha-642542-m03" not found
	I0730 02:49:53.485549 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqcsg" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:53.682010 1658870 request.go:629] Waited for 196.356964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqcsg
	I0730 02:49:53.682135 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqcsg
	I0730 02:49:53.682176 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:53.682202 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:53.682221 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:53.686413 1658870 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 02:49:53.881796 1658870 request.go:629] Waited for 194.267825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:53.881966 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:53.881991 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:53.882029 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:53.882054 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:53.887601 1658870 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 02:49:53.888723 1658870 pod_ready.go:92] pod "kube-proxy-bqcsg" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:53.888788 1658870 pod_ready.go:81] duration metric: took 403.211125ms for pod "kube-proxy-bqcsg" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:53.888817 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:54.082754 1658870 request.go:629] Waited for 193.840989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542
	I0730 02:49:54.082869 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542
	I0730 02:49:54.082906 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:54.082934 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:54.082954 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:54.085979 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:54.282059 1658870 request.go:629] Waited for 195.323423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:54.282189 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:49:54.282214 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:54.282238 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:54.282273 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:54.284953 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:54.285589 1658870 pod_ready.go:92] pod "kube-scheduler-ha-642542" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:54.285640 1658870 pod_ready.go:81] duration metric: took 396.801469ms for pod "kube-scheduler-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:54.285668 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:54.482448 1658870 request.go:629] Waited for 196.681953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542-m02
	I0730 02:49:54.482622 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542-m02
	I0730 02:49:54.482663 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:54.482691 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:54.482712 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:54.485529 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:54.682175 1658870 request.go:629] Waited for 195.246871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:54.691139 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:49:54.691158 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:54.691173 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:54.691178 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:54.697316 1658870 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 02:49:54.698367 1658870 pod_ready.go:92] pod "kube-scheduler-ha-642542-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 02:49:54.698422 1658870 pod_ready.go:81] duration metric: took 412.732799ms for pod "kube-scheduler-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:54.698463 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	I0730 02:49:54.882340 1658870 request.go:629] Waited for 183.787777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542-m03
	I0730 02:49:54.882474 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542-m03
	I0730 02:49:54.882485 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:54.882494 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:54.882499 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:54.885226 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:49:55.082142 1658870 request.go:629] Waited for 196.34894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m03
	I0730 02:49:55.082286 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m03
	I0730 02:49:55.082301 1658870 round_trippers.go:469] Request Headers:
	I0730 02:49:55.082309 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:49:55.082322 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:49:55.085881 1658870 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0730 02:49:55.086032 1658870 pod_ready.go:97] node "ha-642542-m03" hosting pod "kube-scheduler-ha-642542-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-642542-m03": nodes "ha-642542-m03" not found
	I0730 02:49:55.086052 1658870 pod_ready.go:81] duration metric: took 387.564237ms for pod "kube-scheduler-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	E0730 02:49:55.086068 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542-m03" hosting pod "kube-scheduler-ha-642542-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-642542-m03": nodes "ha-642542-m03" not found
	I0730 02:49:55.086078 1658870 pod_ready.go:38] duration metric: took 5.581666302s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 02:49:55.086107 1658870 api_server.go:52] waiting for apiserver process to appear ...
	I0730 02:49:55.086189 1658870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 02:49:55.098826 1658870 api_server.go:72] duration metric: took 24.712951182s to wait for apiserver process to appear ...
	I0730 02:49:55.098854 1658870 api_server.go:88] waiting for apiserver healthz status ...
	I0730 02:49:55.098876 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:49:55.107527 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:49:55.107558 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
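The 500 bodies above are the apiserver's verbose /healthz report: every post-start hook shows [+] except start-service-ip-repair-controllers, which commonly lags briefly after a control-plane restart, and the wait loop re-polls every 500ms until it sees a 200. A minimal sketch of that poll (InsecureSkipVerify keeps the sketch self-contained; the real client verifies against the minikube CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		// On failure the body enumerates each check as [+] ok / [-] failed.
		fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
}
```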
	I0730 02:49:55.599366 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:49:55.608511 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:49:55.608613 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:49:56.099022 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:49:56.107180 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:49:56.107208 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... identical healthz body omitted; the probe loop continued at ~500ms intervals, and every /healthz request from 02:49:56.599 through 02:50:04.607 returned the same 500 response, failing only on [-]poststarthook/start-service-ip-repair-controllers ...]
	I0730 02:50:05.098957 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:05.106918 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:05.106959 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:05.599509 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:05.607219 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:05.607245 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:06.099902 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:06.107575 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:06.107603 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:06.599106 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:06.606545 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:06.606576 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:07.098990 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:07.106635 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:07.106662 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:07.599158 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:07.607076 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:07.607104 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:08.099825 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:08.107535 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:08.107567 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:08.599015 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:08.606692 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:08.606736 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:09.099000 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:09.106598 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:09.106622 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:09.599764 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:09.607906 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:09.607934 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:10.099711 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:10.107935 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:10.108026 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:10.599562 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:10.607189 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:10.607219 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:11.099815 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:11.108206 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:11.108238 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:11.599915 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:11.608098 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:11.608140 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:12.099470 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:12.107841 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:12.107870 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:12.599373 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:12.606983 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:12.607011 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
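	The listing above is kube-apiserver's /healthz response: each registered check reports [+] ok or [-] failed, and the endpoint returns 500 until every check passes. The same per-check body can be requested explicitly with the verbose query parameter, and a single check can be probed at its own subpath (e.g. /healthz/poststarthook/start-service-ip-repair-controllers). Below is a minimal Go sketch of the kind of polling loop this log shows, using the same endpoint the log reports; the 30s deadline and the skipped TLS verification are illustrative assumptions, not minikube's actual api_server.go implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver in this setup serves a cert the probe does not trust,
		// so skip verification here (a real client would use the cluster CA).
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.49.2:8443/healthz?verbose" // endpoint from the log
		deadline := time.Now().Add(30 * time.Second)       // illustrative timeout

		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// On failure the body carries the per-check [+]/[-] listing
				// seen in the log above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}

	In the flow recorded here, the 500s persist across every attempt because the start-service-ip-repair-controllers poststarthook never reports ready, while all other checks stay [+] ok.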
	[... the same healthz probe repeated every ~500ms, from I0730 02:50:13.099462 through W0730 02:50:21.106673 (api_server.go:253/279/103); every attempt returned 500 with an identical check listing: [-]poststarthook/start-service-ip-repair-controllers failed, all other checks [+] ok ...]
	I0730 02:50:21.599018 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:21.606620 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:21.606654 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:22.099996 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:22.110543 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:22.110584 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:22.599787 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:22.608319 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:22.608354 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:23.099706 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:23.108362 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:23.108446 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:23.599895 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:23.607615 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:23.607642 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:24.099029 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:24.106698 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:24.106734 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:24.599419 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:24.607059 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:24.607089 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:25.099296 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:25.107585 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:25.107614 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:25.599023 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:25.606580 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:25.606612 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:26.099016 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:26.106938 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:26.106969 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:26.599657 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:26.607648 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:26.607677 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:27.099000 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:27.106673 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:27.106705 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:27.599016 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:27.606545 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:27.606572 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:28.099143 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:28.107195 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:28.107233 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:28.599853 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:28.607890 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:28.607932 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:29.099232 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:29.108433 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:29.108460 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:29.599437 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:29.607343 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0730 02:50:29.607373 1658870 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0730 02:50:30.099758 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:30.100309 1658870 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
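The block above is minikube's standard apiserver health gate: it polls /healthz roughly every 500ms, printing the full [+]/[-] check listing each time it gets a 500 (here the failing post-start hook is start-service-ip-repair-controllers), until the endpoint returns 200 or, as on the last line, the connection is refused while the apiserver container restarts. A minimal sketch of that polling pattern in Go; the cadence, timeout, and TLS handling are illustrative assumptions, not minikube's actual api_server.go code:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes,
// printing the [+]/[-] check body on every 500 the way the log above does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The test cluster's apiserver cert is self-signed; minikube proper
		// trusts the cluster CA instead of skipping verification
		// (assumption for this sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver restarts.
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence of the timestamps
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}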
	I0730 02:50:30.598964 1658870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 02:50:30.599064 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 02:50:30.658667 1658870 cri.go:89] found id: "3e49aa1354e08d838cde0f2ecec59e1fc43107085d086e7aa959ebd98b30d950"
	I0730 02:50:30.658690 1658870 cri.go:89] found id: "3b9f4ffd6b8f9c88e15c3615df09ed6ad7a6568a9f954ab283d3e35a7b1aac34"
	I0730 02:50:30.658696 1658870 cri.go:89] found id: ""
	I0730 02:50:30.658703 1658870 logs.go:276] 2 containers: [3e49aa1354e08d838cde0f2ecec59e1fc43107085d086e7aa959ebd98b30d950 3b9f4ffd6b8f9c88e15c3615df09ed6ad7a6568a9f954ab283d3e35a7b1aac34]
	I0730 02:50:30.658765 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:30.663719 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:30.668102 1658870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 02:50:30.668183 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 02:50:30.734665 1658870 cri.go:89] found id: "c38bbfc715d0053c0b3be7ffae0f1e1249f11bdff0ae95734050917ced56590a"
	I0730 02:50:30.734685 1658870 cri.go:89] found id: "c0e0da8ebd055cc753d129e41b0ca4c5e6098f8207d8a1eecc2a86def84cb9a5"
	I0730 02:50:30.734691 1658870 cri.go:89] found id: ""
	I0730 02:50:30.734697 1658870 logs.go:276] 2 containers: [c38bbfc715d0053c0b3be7ffae0f1e1249f11bdff0ae95734050917ced56590a c0e0da8ebd055cc753d129e41b0ca4c5e6098f8207d8a1eecc2a86def84cb9a5]
	I0730 02:50:30.734769 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:30.739118 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:30.743010 1658870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 02:50:30.743081 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 02:50:30.795863 1658870 cri.go:89] found id: ""
	I0730 02:50:30.795952 1658870 logs.go:276] 0 containers: []
	W0730 02:50:30.795998 1658870 logs.go:278] No container was found matching "coredns"
	I0730 02:50:30.796031 1658870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 02:50:30.796126 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 02:50:30.851758 1658870 cri.go:89] found id: "305c2bf715e9fab7cbbe3f47b60a787e6fc7a3c811ec483a09e7f8c920f48bf6"
	I0730 02:50:30.851831 1658870 cri.go:89] found id: "4b410334dde56a9280c3bae4dcd918d40051df59ce78eafddfab8b84edf1bb36"
	I0730 02:50:30.851851 1658870 cri.go:89] found id: ""
	I0730 02:50:30.851873 1658870 logs.go:276] 2 containers: [305c2bf715e9fab7cbbe3f47b60a787e6fc7a3c811ec483a09e7f8c920f48bf6 4b410334dde56a9280c3bae4dcd918d40051df59ce78eafddfab8b84edf1bb36]
	I0730 02:50:30.851989 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:30.856226 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:30.860002 1658870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 02:50:30.860071 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 02:50:30.922798 1658870 cri.go:89] found id: ""
	I0730 02:50:30.922819 1658870 logs.go:276] 0 containers: []
	W0730 02:50:30.922828 1658870 logs.go:278] No container was found matching "kube-proxy"
	I0730 02:50:30.922835 1658870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 02:50:30.922893 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 02:50:30.985009 1658870 cri.go:89] found id: "6e0a181f100f47bdc01d441b499800b8fc277726145a414939c76c2134a3ac60"
	I0730 02:50:30.985040 1658870 cri.go:89] found id: "2565d85e5a57e25f5783ad61dc6c1546175593eaa1cf0659169c6ff94a5b4213"
	I0730 02:50:30.985045 1658870 cri.go:89] found id: ""
	I0730 02:50:30.985052 1658870 logs.go:276] 2 containers: [6e0a181f100f47bdc01d441b499800b8fc277726145a414939c76c2134a3ac60 2565d85e5a57e25f5783ad61dc6c1546175593eaa1cf0659169c6ff94a5b4213]
	I0730 02:50:30.985111 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:30.989377 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:30.993233 1658870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 02:50:30.993302 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 02:50:31.062896 1658870 cri.go:89] found id: ""
	I0730 02:50:31.062919 1658870 logs.go:276] 0 containers: []
	W0730 02:50:31.062927 1658870 logs.go:278] No container was found matching "kindnet"
	I0730 02:50:31.062938 1658870 logs.go:123] Gathering logs for etcd [c0e0da8ebd055cc753d129e41b0ca4c5e6098f8207d8a1eecc2a86def84cb9a5] ...
	I0730 02:50:31.062952 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e0da8ebd055cc753d129e41b0ca4c5e6098f8207d8a1eecc2a86def84cb9a5"
	I0730 02:50:31.158636 1658870 logs.go:123] Gathering logs for kube-scheduler [305c2bf715e9fab7cbbe3f47b60a787e6fc7a3c811ec483a09e7f8c920f48bf6] ...
	I0730 02:50:31.158718 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 305c2bf715e9fab7cbbe3f47b60a787e6fc7a3c811ec483a09e7f8c920f48bf6"
	I0730 02:50:31.256558 1658870 logs.go:123] Gathering logs for kube-scheduler [4b410334dde56a9280c3bae4dcd918d40051df59ce78eafddfab8b84edf1bb36] ...
	I0730 02:50:31.256686 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b410334dde56a9280c3bae4dcd918d40051df59ce78eafddfab8b84edf1bb36"
	I0730 02:50:31.313529 1658870 logs.go:123] Gathering logs for kube-controller-manager [6e0a181f100f47bdc01d441b499800b8fc277726145a414939c76c2134a3ac60] ...
	I0730 02:50:31.313556 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e0a181f100f47bdc01d441b499800b8fc277726145a414939c76c2134a3ac60"
	I0730 02:50:31.396765 1658870 logs.go:123] Gathering logs for describe nodes ...
	I0730 02:50:31.396844 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 02:50:31.856896 1658870 logs.go:123] Gathering logs for kube-apiserver [3e49aa1354e08d838cde0f2ecec59e1fc43107085d086e7aa959ebd98b30d950] ...
	I0730 02:50:31.857422 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e49aa1354e08d838cde0f2ecec59e1fc43107085d086e7aa959ebd98b30d950"
	I0730 02:50:31.921269 1658870 logs.go:123] Gathering logs for kube-apiserver [3b9f4ffd6b8f9c88e15c3615df09ed6ad7a6568a9f954ab283d3e35a7b1aac34] ...
	I0730 02:50:31.921340 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b9f4ffd6b8f9c88e15c3615df09ed6ad7a6568a9f954ab283d3e35a7b1aac34"
	I0730 02:50:31.970809 1658870 logs.go:123] Gathering logs for kube-controller-manager [2565d85e5a57e25f5783ad61dc6c1546175593eaa1cf0659169c6ff94a5b4213] ...
	I0730 02:50:31.970879 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2565d85e5a57e25f5783ad61dc6c1546175593eaa1cf0659169c6ff94a5b4213"
	I0730 02:50:32.030349 1658870 logs.go:123] Gathering logs for CRI-O ...
	I0730 02:50:32.030417 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 02:50:32.106921 1658870 logs.go:123] Gathering logs for container status ...
	I0730 02:50:32.106993 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 02:50:32.186480 1658870 logs.go:123] Gathering logs for kubelet ...
	I0730 02:50:32.186559 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 02:50:32.278615 1658870 logs.go:123] Gathering logs for dmesg ...
	I0730 02:50:32.278741 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 02:50:32.302249 1658870 logs.go:123] Gathering logs for etcd [c38bbfc715d0053c0b3be7ffae0f1e1249f11bdff0ae95734050917ced56590a] ...
	I0730 02:50:32.302363 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c38bbfc715d0053c0b3be7ffae0f1e1249f11bdff0ae95734050917ced56590a"
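While the health check keeps failing, minikube gathers diagnostics: it resolves container IDs with `sudo crictl ps -a --quiet --name=<component>` (note the two kube-apiserver and two etcd IDs, one container per restart), then tails each with `crictl logs --tail 400 <id>`, plus journalctl for crio and the kubelet. A rough sketch of that ID-then-tail pattern, assuming crictl is on PATH and sudo is passwordless as on the minikube node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`:
// one container ID per output line.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors `sudo crictl logs --tail 400 <id>`.
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id, 400)
			fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
		}
	}
}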
	I0730 02:50:34.919589 1658870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0730 02:50:34.927496 1658870 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0730 02:50:34.927565 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0730 02:50:34.927578 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:34.927589 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:34.927603 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:34.940425 1658870 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0730 02:50:34.940527 1658870 api_server.go:141] control plane version: v1.30.3
	I0730 02:50:34.940547 1658870 api_server.go:131] duration metric: took 39.841685192s to wait for apiserver health ...
	I0730 02:50:34.940555 1658870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0730 02:50:34.940590 1658870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0730 02:50:34.940655 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0730 02:50:34.985911 1658870 cri.go:89] found id: "3e49aa1354e08d838cde0f2ecec59e1fc43107085d086e7aa959ebd98b30d950"
	I0730 02:50:34.985937 1658870 cri.go:89] found id: "3b9f4ffd6b8f9c88e15c3615df09ed6ad7a6568a9f954ab283d3e35a7b1aac34"
	I0730 02:50:34.985942 1658870 cri.go:89] found id: ""
	I0730 02:50:34.985949 1658870 logs.go:276] 2 containers: [3e49aa1354e08d838cde0f2ecec59e1fc43107085d086e7aa959ebd98b30d950 3b9f4ffd6b8f9c88e15c3615df09ed6ad7a6568a9f954ab283d3e35a7b1aac34]
	I0730 02:50:34.986007 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:34.989601 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:34.993005 1658870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0730 02:50:34.993130 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0730 02:50:35.035229 1658870 cri.go:89] found id: "c38bbfc715d0053c0b3be7ffae0f1e1249f11bdff0ae95734050917ced56590a"
	I0730 02:50:35.035251 1658870 cri.go:89] found id: "c0e0da8ebd055cc753d129e41b0ca4c5e6098f8207d8a1eecc2a86def84cb9a5"
	I0730 02:50:35.035256 1658870 cri.go:89] found id: ""
	I0730 02:50:35.035264 1658870 logs.go:276] 2 containers: [c38bbfc715d0053c0b3be7ffae0f1e1249f11bdff0ae95734050917ced56590a c0e0da8ebd055cc753d129e41b0ca4c5e6098f8207d8a1eecc2a86def84cb9a5]
	I0730 02:50:35.035324 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:35.039131 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:35.042779 1658870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0730 02:50:35.042854 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0730 02:50:35.089738 1658870 cri.go:89] found id: ""
	I0730 02:50:35.089770 1658870 logs.go:276] 0 containers: []
	W0730 02:50:35.089781 1658870 logs.go:278] No container was found matching "coredns"
	I0730 02:50:35.089790 1658870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0730 02:50:35.089876 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0730 02:50:35.136750 1658870 cri.go:89] found id: "305c2bf715e9fab7cbbe3f47b60a787e6fc7a3c811ec483a09e7f8c920f48bf6"
	I0730 02:50:35.136837 1658870 cri.go:89] found id: "4b410334dde56a9280c3bae4dcd918d40051df59ce78eafddfab8b84edf1bb36"
	I0730 02:50:35.136850 1658870 cri.go:89] found id: ""
	I0730 02:50:35.136859 1658870 logs.go:276] 2 containers: [305c2bf715e9fab7cbbe3f47b60a787e6fc7a3c811ec483a09e7f8c920f48bf6 4b410334dde56a9280c3bae4dcd918d40051df59ce78eafddfab8b84edf1bb36]
	I0730 02:50:35.136918 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:35.140783 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:35.144661 1658870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0730 02:50:35.144776 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0730 02:50:35.189553 1658870 cri.go:89] found id: ""
	I0730 02:50:35.189582 1658870 logs.go:276] 0 containers: []
	W0730 02:50:35.189591 1658870 logs.go:278] No container was found matching "kube-proxy"
	I0730 02:50:35.189597 1658870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0730 02:50:35.189705 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0730 02:50:35.238218 1658870 cri.go:89] found id: "6e0a181f100f47bdc01d441b499800b8fc277726145a414939c76c2134a3ac60"
	I0730 02:50:35.238258 1658870 cri.go:89] found id: "2565d85e5a57e25f5783ad61dc6c1546175593eaa1cf0659169c6ff94a5b4213"
	I0730 02:50:35.238265 1658870 cri.go:89] found id: ""
	I0730 02:50:35.238272 1658870 logs.go:276] 2 containers: [6e0a181f100f47bdc01d441b499800b8fc277726145a414939c76c2134a3ac60 2565d85e5a57e25f5783ad61dc6c1546175593eaa1cf0659169c6ff94a5b4213]
	I0730 02:50:35.238338 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:35.241950 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:35.245757 1658870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0730 02:50:35.245828 1658870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0730 02:50:35.285752 1658870 cri.go:89] found id: ""
	I0730 02:50:35.285776 1658870 logs.go:276] 0 containers: []
	W0730 02:50:35.285785 1658870 logs.go:278] No container was found matching "kindnet"
	I0730 02:50:35.285793 1658870 logs.go:123] Gathering logs for dmesg ...
	I0730 02:50:35.285807 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0730 02:50:35.308252 1658870 logs.go:123] Gathering logs for describe nodes ...
	I0730 02:50:35.308328 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0730 02:50:35.568740 1658870 logs.go:123] Gathering logs for kube-apiserver [3e49aa1354e08d838cde0f2ecec59e1fc43107085d086e7aa959ebd98b30d950] ...
	I0730 02:50:35.568802 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e49aa1354e08d838cde0f2ecec59e1fc43107085d086e7aa959ebd98b30d950"
	I0730 02:50:35.614697 1658870 logs.go:123] Gathering logs for kube-apiserver [3b9f4ffd6b8f9c88e15c3615df09ed6ad7a6568a9f954ab283d3e35a7b1aac34] ...
	I0730 02:50:35.614733 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b9f4ffd6b8f9c88e15c3615df09ed6ad7a6568a9f954ab283d3e35a7b1aac34"
	I0730 02:50:35.655483 1658870 logs.go:123] Gathering logs for kube-scheduler [305c2bf715e9fab7cbbe3f47b60a787e6fc7a3c811ec483a09e7f8c920f48bf6] ...
	I0730 02:50:35.655516 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 305c2bf715e9fab7cbbe3f47b60a787e6fc7a3c811ec483a09e7f8c920f48bf6"
	I0730 02:50:35.716934 1658870 logs.go:123] Gathering logs for kube-scheduler [4b410334dde56a9280c3bae4dcd918d40051df59ce78eafddfab8b84edf1bb36] ...
	I0730 02:50:35.716976 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b410334dde56a9280c3bae4dcd918d40051df59ce78eafddfab8b84edf1bb36"
	I0730 02:50:35.755283 1658870 logs.go:123] Gathering logs for kube-controller-manager [2565d85e5a57e25f5783ad61dc6c1546175593eaa1cf0659169c6ff94a5b4213] ...
	I0730 02:50:35.755313 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2565d85e5a57e25f5783ad61dc6c1546175593eaa1cf0659169c6ff94a5b4213"
	I0730 02:50:35.791320 1658870 logs.go:123] Gathering logs for kubelet ...
	I0730 02:50:35.791351 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0730 02:50:35.874011 1658870 logs.go:123] Gathering logs for etcd [c38bbfc715d0053c0b3be7ffae0f1e1249f11bdff0ae95734050917ced56590a] ...
	I0730 02:50:35.874045 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c38bbfc715d0053c0b3be7ffae0f1e1249f11bdff0ae95734050917ced56590a"
	I0730 02:50:35.934428 1658870 logs.go:123] Gathering logs for etcd [c0e0da8ebd055cc753d129e41b0ca4c5e6098f8207d8a1eecc2a86def84cb9a5] ...
	I0730 02:50:35.934460 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e0da8ebd055cc753d129e41b0ca4c5e6098f8207d8a1eecc2a86def84cb9a5"
	I0730 02:50:35.993765 1658870 logs.go:123] Gathering logs for kube-controller-manager [6e0a181f100f47bdc01d441b499800b8fc277726145a414939c76c2134a3ac60] ...
	I0730 02:50:35.993801 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e0a181f100f47bdc01d441b499800b8fc277726145a414939c76c2134a3ac60"
	I0730 02:50:36.086069 1658870 logs.go:123] Gathering logs for CRI-O ...
	I0730 02:50:36.086106 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0730 02:50:36.161956 1658870 logs.go:123] Gathering logs for container status ...
	I0730 02:50:36.161994 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0730 02:50:38.708056 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0730 02:50:38.708081 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:38.708090 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:38.708095 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:38.722405 1658870 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0730 02:50:38.731149 1658870 system_pods.go:59] 26 kube-system pods found
	I0730 02:50:38.731205 1658870 system_pods.go:61] "coredns-7db6d8ff4d-5shks" [a1fe3992-7955-4a26-b14e-b202c050fe92] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0730 02:50:38.731215 1658870 system_pods.go:61] "coredns-7db6d8ff4d-7vr5f" [94efb330-c89f-438f-a925-171e3b7d2c5a] Running
	I0730 02:50:38.731222 1658870 system_pods.go:61] "etcd-ha-642542" [df0756ab-50e3-4fd5-b849-2be21d896224] Running
	I0730 02:50:38.731251 1658870 system_pods.go:61] "etcd-ha-642542-m02" [0ea5876e-0cc1-4699-b60e-dde465233356] Running
	I0730 02:50:38.731263 1658870 system_pods.go:61] "etcd-ha-642542-m03" [ca86de58-8132-42c5-b15a-e7979bdac498] Running
	I0730 02:50:38.731268 1658870 system_pods.go:61] "kindnet-48qbs" [39e31702-721e-407b-986a-8fab2fe77f60] Running
	I0730 02:50:38.731273 1658870 system_pods.go:61] "kindnet-bbnnt" [b15d30d7-cc7e-410c-9868-bcc471119ff2] Running
	I0730 02:50:38.731277 1658870 system_pods.go:61] "kindnet-d8j9f" [fac351a2-7b1f-475f-95ab-184a82371cb6] Running
	I0730 02:50:38.731283 1658870 system_pods.go:61] "kindnet-lsdrr" [620a6d8f-bf0b-4bee-a96e-3e505ac6e27b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0730 02:50:38.731295 1658870 system_pods.go:61] "kube-apiserver-ha-642542" [889d9de0-e8ee-45ac-bff7-31d88f86187b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0730 02:50:38.731303 1658870 system_pods.go:61] "kube-apiserver-ha-642542-m02" [8112a12e-40d7-4d67-96d5-8416c366a75d] Running
	I0730 02:50:38.731326 1658870 system_pods.go:61] "kube-apiserver-ha-642542-m03" [807039a3-0ea6-4eec-9f96-760abb6ef492] Running
	I0730 02:50:38.731341 1658870 system_pods.go:61] "kube-controller-manager-ha-642542" [fcdfb668-fe28-49ba-af3e-2f0fde94ef9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0730 02:50:38.731346 1658870 system_pods.go:61] "kube-controller-manager-ha-642542-m02" [5cb9623c-a505-415c-ba15-0219a2b42b44] Running
	I0730 02:50:38.731357 1658870 system_pods.go:61] "kube-controller-manager-ha-642542-m03" [d73eb497-2727-4622-bbd4-71fc0077c270] Running
	I0730 02:50:38.731361 1658870 system_pods.go:61] "kube-proxy-72lmf" [3460e066-2e44-44f9-9dba-b6ceef376238] Running
	I0730 02:50:38.731366 1658870 system_pods.go:61] "kube-proxy-7rrfn" [f3dc025f-9487-43da-902f-b4fd9c9cec92] Running
	I0730 02:50:38.731370 1658870 system_pods.go:61] "kube-proxy-7txb9" [3ebe947a-5004-4e1d-93b6-f918513fde88] Running
	I0730 02:50:38.731375 1658870 system_pods.go:61] "kube-proxy-bqcsg" [56499320-63fa-49df-bcec-491600de4a24] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0730 02:50:38.731385 1658870 system_pods.go:61] "kube-scheduler-ha-642542" [f5333088-fc91-43dc-a2ac-d5f824e8adfe] Running
	I0730 02:50:38.731402 1658870 system_pods.go:61] "kube-scheduler-ha-642542-m02" [024a1f3e-816b-44b7-ad66-92330ee2a0af] Running
	I0730 02:50:38.731414 1658870 system_pods.go:61] "kube-scheduler-ha-642542-m03" [c212a701-43fd-4f2a-b4be-33993f040c84] Running
	I0730 02:50:38.731418 1658870 system_pods.go:61] "kube-vip-ha-642542" [a10776e5-7329-43a8-b0ee-4b94a8f4daa3] Running
	I0730 02:50:38.731422 1658870 system_pods.go:61] "kube-vip-ha-642542-m02" [b6d69903-de74-4aed-bd83-99626c6f62b1] Running
	I0730 02:50:38.731473 1658870 system_pods.go:61] "kube-vip-ha-642542-m03" [bf673688-6a34-4c29-9dee-955637b1b92c] Running
	I0730 02:50:38.731478 1658870 system_pods.go:61] "storage-provisioner" [f02271e7-2b20-4de1-acd7-e05ac977cc58] Running
	I0730 02:50:38.731484 1658870 system_pods.go:74] duration metric: took 3.790919516s to wait for pod list to return data ...
	I0730 02:50:38.731496 1658870 default_sa.go:34] waiting for default service account to be created ...
	I0730 02:50:38.731600 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0730 02:50:38.731613 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:38.731622 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:38.731639 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:38.734512 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:38.734753 1658870 default_sa.go:45] found service account: "default"
	I0730 02:50:38.734771 1658870 default_sa.go:55] duration metric: took 3.269238ms for default service account to be created ...
	I0730 02:50:38.734781 1658870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0730 02:50:38.734836 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0730 02:50:38.734844 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:38.734852 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:38.734864 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:38.741410 1658870 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 02:50:38.751017 1658870 system_pods.go:86] 26 kube-system pods found
	I0730 02:50:38.751059 1658870 system_pods.go:89] "coredns-7db6d8ff4d-5shks" [a1fe3992-7955-4a26-b14e-b202c050fe92] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0730 02:50:38.751068 1658870 system_pods.go:89] "coredns-7db6d8ff4d-7vr5f" [94efb330-c89f-438f-a925-171e3b7d2c5a] Running
	I0730 02:50:38.751174 1658870 system_pods.go:89] "etcd-ha-642542" [df0756ab-50e3-4fd5-b849-2be21d896224] Running
	I0730 02:50:38.751190 1658870 system_pods.go:89] "etcd-ha-642542-m02" [0ea5876e-0cc1-4699-b60e-dde465233356] Running
	I0730 02:50:38.751196 1658870 system_pods.go:89] "etcd-ha-642542-m03" [ca86de58-8132-42c5-b15a-e7979bdac498] Running
	I0730 02:50:38.751201 1658870 system_pods.go:89] "kindnet-48qbs" [39e31702-721e-407b-986a-8fab2fe77f60] Running
	I0730 02:50:38.751205 1658870 system_pods.go:89] "kindnet-bbnnt" [b15d30d7-cc7e-410c-9868-bcc471119ff2] Running
	I0730 02:50:38.751213 1658870 system_pods.go:89] "kindnet-d8j9f" [fac351a2-7b1f-475f-95ab-184a82371cb6] Running
	I0730 02:50:38.751222 1658870 system_pods.go:89] "kindnet-lsdrr" [620a6d8f-bf0b-4bee-a96e-3e505ac6e27b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0730 02:50:38.751258 1658870 system_pods.go:89] "kube-apiserver-ha-642542" [889d9de0-e8ee-45ac-bff7-31d88f86187b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0730 02:50:38.751277 1658870 system_pods.go:89] "kube-apiserver-ha-642542-m02" [8112a12e-40d7-4d67-96d5-8416c366a75d] Running
	I0730 02:50:38.751299 1658870 system_pods.go:89] "kube-apiserver-ha-642542-m03" [807039a3-0ea6-4eec-9f96-760abb6ef492] Running
	I0730 02:50:38.751307 1658870 system_pods.go:89] "kube-controller-manager-ha-642542" [fcdfb668-fe28-49ba-af3e-2f0fde94ef9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0730 02:50:38.751319 1658870 system_pods.go:89] "kube-controller-manager-ha-642542-m02" [5cb9623c-a505-415c-ba15-0219a2b42b44] Running
	I0730 02:50:38.751325 1658870 system_pods.go:89] "kube-controller-manager-ha-642542-m03" [d73eb497-2727-4622-bbd4-71fc0077c270] Running
	I0730 02:50:38.751332 1658870 system_pods.go:89] "kube-proxy-72lmf" [3460e066-2e44-44f9-9dba-b6ceef376238] Running
	I0730 02:50:38.751336 1658870 system_pods.go:89] "kube-proxy-7rrfn" [f3dc025f-9487-43da-902f-b4fd9c9cec92] Running
	I0730 02:50:38.751348 1658870 system_pods.go:89] "kube-proxy-7txb9" [3ebe947a-5004-4e1d-93b6-f918513fde88] Running
	I0730 02:50:38.751354 1658870 system_pods.go:89] "kube-proxy-bqcsg" [56499320-63fa-49df-bcec-491600de4a24] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0730 02:50:38.751368 1658870 system_pods.go:89] "kube-scheduler-ha-642542" [f5333088-fc91-43dc-a2ac-d5f824e8adfe] Running
	I0730 02:50:38.751374 1658870 system_pods.go:89] "kube-scheduler-ha-642542-m02" [024a1f3e-816b-44b7-ad66-92330ee2a0af] Running
	I0730 02:50:38.751379 1658870 system_pods.go:89] "kube-scheduler-ha-642542-m03" [c212a701-43fd-4f2a-b4be-33993f040c84] Running
	I0730 02:50:38.751383 1658870 system_pods.go:89] "kube-vip-ha-642542" [a10776e5-7329-43a8-b0ee-4b94a8f4daa3] Running
	I0730 02:50:38.751391 1658870 system_pods.go:89] "kube-vip-ha-642542-m02" [b6d69903-de74-4aed-bd83-99626c6f62b1] Running
	I0730 02:50:38.751399 1658870 system_pods.go:89] "kube-vip-ha-642542-m03" [bf673688-6a34-4c29-9dee-955637b1b92c] Running
	I0730 02:50:38.751403 1658870 system_pods.go:89] "storage-provisioner" [f02271e7-2b20-4de1-acd7-e05ac977cc58] Running
	I0730 02:50:38.751413 1658870 system_pods.go:126] duration metric: took 16.626116ms to wait for k8s-apps to be running ...
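The 26-pod listing above is the "k8s-apps running" gate: list everything in kube-system and flag any pod that is not Running (the ContainersNotReady annotations mark containers that are up but still failing readiness after the restart). A sketch of the same check with client-go; the kubeconfig path is the one used by the `describe nodes` command earlier in the log and is otherwise an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/namespaces/kube-system/pods, as in the round_trippers lines.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%q is not running: %s\n", p.Name, p.Status.Phase)
		}
	}
}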
	I0730 02:50:38.751421 1658870 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 02:50:38.751486 1658870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 02:50:38.764168 1658870 system_svc.go:56] duration metric: took 12.737012ms WaitForService to wait for kubelet
	I0730 02:50:38.764197 1658870 kubeadm.go:582] duration metric: took 1m8.378329498s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 02:50:38.764218 1658870 node_conditions.go:102] verifying NodePressure condition ...
	I0730 02:50:38.764287 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0730 02:50:38.764299 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:38.764307 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:38.764310 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:38.767453 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:38.769179 1658870 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0730 02:50:38.769261 1658870 node_conditions.go:123] node cpu capacity is 2
	I0730 02:50:38.769291 1658870 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0730 02:50:38.769311 1658870 node_conditions.go:123] node cpu capacity is 2
	I0730 02:50:38.769355 1658870 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0730 02:50:38.769383 1658870 node_conditions.go:123] node cpu capacity is 2
	I0730 02:50:38.769404 1658870 node_conditions.go:105] duration metric: took 5.179887ms to run NodePressure ...
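The NodePressure step reduces to a GET on /api/v1/nodes followed by a read of each node's reported capacity; here all three remaining nodes show 2 CPUs and 203034800Ki of ephemeral storage. Sketched with the same client-go setup as the previous example (kubeconfig path again assumed):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList; Cpu() and StorageEphemeral() are
		// convenience accessors returning resource.Quantity values.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}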
	I0730 02:50:38.769450 1658870 start.go:241] waiting for startup goroutines ...
	I0730 02:50:38.769492 1658870 start.go:255] writing updated cluster config ...
	I0730 02:50:38.772136 1658870 out.go:177] 
	I0730 02:50:38.774546 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:50:38.774720 1658870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/config.json ...
	I0730 02:50:38.776978 1658870 out.go:177] * Starting "ha-642542-m04" worker node in "ha-642542" cluster
	I0730 02:50:38.779895 1658870 cache.go:121] Beginning downloading kic base image for docker with crio
	I0730 02:50:38.781950 1658870 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0730 02:50:38.783862 1658870 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:50:38.783892 1658870 cache.go:56] Caching tarball of preloaded images
	I0730 02:50:38.784030 1658870 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0730 02:50:38.784041 1658870 preload.go:172] Found /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0730 02:50:38.784054 1658870 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 02:50:38.784178 1658870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/config.json ...
	W0730 02:50:38.803492 1658870 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0730 02:50:38.803535 1658870 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0730 02:50:38.803608 1658870 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0730 02:50:38.803634 1658870 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0730 02:50:38.803642 1658870 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0730 02:50:38.803650 1658870 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0730 02:50:38.803656 1658870 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0730 02:50:38.804947 1658870 image.go:273] response: 
	I0730 02:50:38.931373 1658870 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0730 02:50:38.931418 1658870 cache.go:194] Successfully downloaded all kic artifacts
	I0730 02:50:38.931448 1658870 start.go:360] acquireMachinesLock for ha-642542-m04: {Name:mk0b8fb95ae375a932efa437547c75d87ba68b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0730 02:50:38.931515 1658870 start.go:364] duration metric: took 44.258µs to acquireMachinesLock for "ha-642542-m04"
	I0730 02:50:38.931545 1658870 start.go:96] Skipping create...Using existing machine configuration
	I0730 02:50:38.931554 1658870 fix.go:54] fixHost starting: m04
	I0730 02:50:38.931829 1658870 cli_runner.go:164] Run: docker container inspect ha-642542-m04 --format={{.State.Status}}
	I0730 02:50:38.954177 1658870 fix.go:112] recreateIfNeeded on ha-642542-m04: state=Stopped err=<nil>
	W0730 02:50:38.954214 1658870 fix.go:138] unexpected machine state, will restart: <nil>
	I0730 02:50:38.957699 1658870 out.go:177] * Restarting existing docker container for "ha-642542-m04" ...
	I0730 02:50:38.959567 1658870 cli_runner.go:164] Run: docker start ha-642542-m04
	I0730 02:50:39.289054 1658870 cli_runner.go:164] Run: docker container inspect ha-642542-m04 --format={{.State.Status}}
	I0730 02:50:39.309143 1658870 kic.go:430] container "ha-642542-m04" state is running.
	I0730 02:50:39.310432 1658870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542-m04
	I0730 02:50:39.336410 1658870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/config.json ...
	I0730 02:50:39.336891 1658870 machine.go:94] provisionDockerMachine start ...
	I0730 02:50:39.336967 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m04
	I0730 02:50:39.357135 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:50:39.357435 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38953 <nil> <nil>}
	I0730 02:50:39.357448 1658870 main.go:141] libmachine: About to run SSH command:
	hostname
	I0730 02:50:39.359000 1658870 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0730 02:50:42.496168 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-642542-m04
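The `ssh: handshake failed: EOF` line above is the expected first-attempt failure right after `docker start`: sshd inside the container is not accepting connections yet, so the provisioner retries until the `hostname` command succeeds about three seconds later. A sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh; the user, forwarded port, and key path are taken from the sshutil lines in this log, while the retry policy and host-key handling are assumptions:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err // e.g. "ssh: handshake failed: EOF" while sshd is still coming up
		time.Sleep(time.Second)
	}
	return nil, lastErr
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m04/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; never in production
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:38953", cfg, 30)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	out, _ := sess.CombinedOutput("hostname")
	fmt.Printf("%s", out)
}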
	
	I0730 02:50:42.496196 1658870 ubuntu.go:169] provisioning hostname "ha-642542-m04"
	I0730 02:50:42.496321 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m04
	I0730 02:50:42.517818 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:50:42.518118 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38953 <nil> <nil>}
	I0730 02:50:42.518136 1658870 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-642542-m04 && echo "ha-642542-m04" | sudo tee /etc/hostname
	I0730 02:50:42.673242 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-642542-m04
	
	I0730 02:50:42.673322 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m04
	I0730 02:50:42.691718 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:50:42.691952 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38953 <nil> <nil>}
	I0730 02:50:42.692001 1658870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-642542-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-642542-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-642542-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0730 02:50:42.824207 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0730 02:50:42.824241 1658870 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19348-1592571/.minikube CaCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19348-1592571/.minikube}
	I0730 02:50:42.824257 1658870 ubuntu.go:177] setting up certificates
	I0730 02:50:42.824267 1658870 provision.go:84] configureAuth start
	I0730 02:50:42.824336 1658870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542-m04
	I0730 02:50:42.845613 1658870 provision.go:143] copyHostCerts
	I0730 02:50:42.845662 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem
	I0730 02:50:42.845702 1658870 exec_runner.go:144] found /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem, removing ...
	I0730 02:50:42.845714 1658870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem
	I0730 02:50:42.845797 1658870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/cert.pem (1123 bytes)
	I0730 02:50:42.845885 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem
	I0730 02:50:42.845907 1658870 exec_runner.go:144] found /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem, removing ...
	I0730 02:50:42.845918 1658870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem
	I0730 02:50:42.845945 1658870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/key.pem (1675 bytes)
	I0730 02:50:42.845994 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem
	I0730 02:50:42.846014 1658870 exec_runner.go:144] found /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem, removing ...
	I0730 02:50:42.846022 1658870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem
	I0730 02:50:42.846046 1658870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.pem (1078 bytes)
	I0730 02:50:42.846102 1658870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem org=jenkins.ha-642542-m04 san=[127.0.0.1 192.168.49.5 ha-642542-m04 localhost minikube]
	I0730 02:50:43.086070 1658870 provision.go:177] copyRemoteCerts
	I0730 02:50:43.086141 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0730 02:50:43.086189 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m04
	I0730 02:50:43.107729 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38953 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m04/id_rsa Username:docker}
	I0730 02:50:43.205604 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0730 02:50:43.205672 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0730 02:50:43.233594 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0730 02:50:43.233660 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0730 02:50:43.282140 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0730 02:50:43.282202 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0730 02:50:43.316662 1658870 provision.go:87] duration metric: took 492.380601ms to configureAuth
	I0730 02:50:43.316694 1658870 ubuntu.go:193] setting minikube options for container-runtime
	I0730 02:50:43.316947 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:50:43.317068 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m04
	I0730 02:50:43.339932 1658870 main.go:141] libmachine: Using SSH client type: native
	I0730 02:50:43.340295 1658870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 38953 <nil> <nil>}
	I0730 02:50:43.340427 1658870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0730 02:50:43.630237 1658870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0730 02:50:43.630302 1658870 machine.go:97] duration metric: took 4.293395055s to provisionDockerMachine
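The stray `%!s(MISSING)` a few lines up is not part of the command that actually ran: it is Go's fmt error marker for a format verb with no matching argument, produced when a command line that itself contains a literal `%s` (the `printf %s` writing the CRIO_MINIKUBE_OPTIONS drop-in) is passed through a Printf-style logger. A two-line illustration:

package main

import "fmt"

func main() {
	// A %s verb with no matching argument makes fmt emit "%!s(MISSING)"
	// (go vet flags this, but it compiles and runs):
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"\n")
	// Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "..."
}

The `%!p(MISSING)` in the find command below has the same cause: find's `-printf "%p, "` verb collides with the logger's format parsing.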
	I0730 02:50:43.630329 1658870 start.go:293] postStartSetup for "ha-642542-m04" (driver="docker")
	I0730 02:50:43.630366 1658870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0730 02:50:43.630447 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0730 02:50:43.630521 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m04
	I0730 02:50:43.647893 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38953 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m04/id_rsa Username:docker}
	I0730 02:50:43.745238 1658870 ssh_runner.go:195] Run: cat /etc/os-release
	I0730 02:50:43.748576 1658870 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0730 02:50:43.748613 1658870 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0730 02:50:43.748623 1658870 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0730 02:50:43.748630 1658870 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0730 02:50:43.748643 1658870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19348-1592571/.minikube/addons for local assets ...
	I0730 02:50:43.748704 1658870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19348-1592571/.minikube/files for local assets ...
	I0730 02:50:43.748791 1658870 filesync.go:149] local asset: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem -> 15979582.pem in /etc/ssl/certs
	I0730 02:50:43.748803 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem -> /etc/ssl/certs/15979582.pem
	I0730 02:50:43.748907 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0730 02:50:43.757677 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem --> /etc/ssl/certs/15979582.pem (1708 bytes)
	I0730 02:50:43.788362 1658870 start.go:296] duration metric: took 157.993857ms for postStartSetup
	I0730 02:50:43.788456 1658870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 02:50:43.788497 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m04
	I0730 02:50:43.806864 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38953 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m04/id_rsa Username:docker}
	I0730 02:50:43.900930 1658870 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0730 02:50:43.905757 1658870 fix.go:56] duration metric: took 4.974196089s for fixHost
	I0730 02:50:43.905779 1658870 start.go:83] releasing machines lock for "ha-642542-m04", held for 4.974247239s
	I0730 02:50:43.905846 1658870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542-m04
	I0730 02:50:43.925673 1658870 out.go:177] * Found network options:
	I0730 02:50:43.927407 1658870 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0730 02:50:43.929140 1658870 proxy.go:119] fail to check proxy env: Error ip not in block
	W0730 02:50:43.929170 1658870 proxy.go:119] fail to check proxy env: Error ip not in block
	W0730 02:50:43.929199 1658870 proxy.go:119] fail to check proxy env: Error ip not in block
	W0730 02:50:43.929211 1658870 proxy.go:119] fail to check proxy env: Error ip not in block
	I0730 02:50:43.929280 1658870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0730 02:50:43.929322 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m04
	I0730 02:50:43.929344 1658870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0730 02:50:43.929405 1658870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m04
	I0730 02:50:43.955358 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38953 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m04/id_rsa Username:docker}
	I0730 02:50:43.960099 1658870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38953 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m04/id_rsa Username:docker}
	I0730 02:50:44.232945 1658870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0730 02:50:44.238117 1658870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 02:50:44.248402 1658870 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0730 02:50:44.248510 1658870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0730 02:50:44.258373 1658870 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0730 02:50:44.258395 1658870 start.go:495] detecting cgroup driver to use...
	I0730 02:50:44.258441 1658870 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0730 02:50:44.258495 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0730 02:50:44.272780 1658870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0730 02:50:44.284879 1658870 docker.go:217] disabling cri-docker service (if available) ...
	I0730 02:50:44.284939 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0730 02:50:44.298629 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0730 02:50:44.312904 1658870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0730 02:50:44.405501 1658870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0730 02:50:44.497115 1658870 docker.go:233] disabling docker service ...
	I0730 02:50:44.497197 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0730 02:50:44.512047 1658870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0730 02:50:44.533323 1658870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0730 02:50:44.629897 1658870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0730 02:50:44.728257 1658870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0730 02:50:44.741756 1658870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0730 02:50:44.758909 1658870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0730 02:50:44.759031 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:50:44.770062 1658870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0730 02:50:44.770135 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:50:44.779843 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:50:44.789973 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:50:44.799765 1658870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0730 02:50:44.809132 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:50:44.820122 1658870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:50:44.830072 1658870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0730 02:50:44.841516 1658870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0730 02:50:44.854507 1658870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0730 02:50:44.863090 1658870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:50:44.956677 1658870 ssh_runner.go:195] Run: sudo systemctl restart crio
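
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A minimal Go sketch of the same edit-and-restart pattern is below; it is not minikube's actual code, just the shell steps from the log driven through os/exec.

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell step and surfaces its combined output on failure.
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// set the pause image, as crio.go:59 logs above
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
		// switch the cgroup manager to cgroupfs, as crio.go:70 logs above
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			panic(err)
		}
	}
}
```
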
	I0730 02:50:45.115074 1658870 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0730 02:50:45.115191 1658870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0730 02:50:45.120883 1658870 start.go:563] Will wait 60s for crictl version
	I0730 02:50:45.121082 1658870 ssh_runner.go:195] Run: which crictl
	I0730 02:50:45.127186 1658870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0730 02:50:45.221138 1658870 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0730 02:50:45.221264 1658870 ssh_runner.go:195] Run: crio --version
	I0730 02:50:45.289375 1658870 ssh_runner.go:195] Run: crio --version
	I0730 02:50:45.338753 1658870 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0730 02:50:45.340469 1658870 out.go:177]   - env NO_PROXY=192.168.49.2
	I0730 02:50:45.342189 1658870 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0730 02:50:45.344073 1658870 cli_runner.go:164] Run: docker network inspect ha-642542 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0730 02:50:45.360183 1658870 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0730 02:50:45.379848 1658870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
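
The one-liner above is the standard /etc/hosts refresh idiom: grep -v drops any stale host.minikube.internal mapping, the fresh entry is appended, and the result is copied back under sudo via a temp file. A sketch of the same idiom in Go follows; the file paths come from the log, everything else is illustrative.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// keep every line that is not a stale host.minikube.internal mapping
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	// the log performs this copy under sudo: sudo cp /tmp/h.$$ /etc/hosts
	if err := exec.Command("sudo", "cp", tmp, "/etc/hosts").Run(); err != nil {
		panic(err)
	}
}
```
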
	I0730 02:50:45.391746 1658870 mustload.go:65] Loading cluster: ha-642542
	I0730 02:50:45.392091 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:50:45.392354 1658870 cli_runner.go:164] Run: docker container inspect ha-642542 --format={{.State.Status}}
	I0730 02:50:45.409096 1658870 host.go:66] Checking if "ha-642542" exists ...
	I0730 02:50:45.409368 1658870 certs.go:68] Setting up /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542 for IP: 192.168.49.5
	I0730 02:50:45.409376 1658870 certs.go:194] generating shared ca certs ...
	I0730 02:50:45.409391 1658870 certs.go:226] acquiring lock for ca certs: {Name:mkd188f515cf1f581cef2c6a3cc946da59d73d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:50:45.409516 1658870 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key
	I0730 02:50:45.409556 1658870 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key
	I0730 02:50:45.409567 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0730 02:50:45.409580 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0730 02:50:45.409591 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0730 02:50:45.409601 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0730 02:50:45.409657 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958.pem (1338 bytes)
	W0730 02:50:45.409686 1658870 certs.go:480] ignoring /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958_empty.pem, impossibly tiny 0 bytes
	I0730 02:50:45.409694 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca-key.pem (1679 bytes)
	I0730 02:50:45.409722 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/ca.pem (1078 bytes)
	I0730 02:50:45.409743 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/cert.pem (1123 bytes)
	I0730 02:50:45.409768 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/key.pem (1675 bytes)
	I0730 02:50:45.409813 1658870 certs.go:484] found cert: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem (1708 bytes)
	I0730 02:50:45.409840 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem -> /usr/share/ca-certificates/15979582.pem
	I0730 02:50:45.409854 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:50:45.409867 1658870 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958.pem -> /usr/share/ca-certificates/1597958.pem
	I0730 02:50:45.409885 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0730 02:50:45.436771 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0730 02:50:45.463128 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0730 02:50:45.489968 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0730 02:50:45.516901 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/ssl/certs/15979582.pem --> /usr/share/ca-certificates/15979582.pem (1708 bytes)
	I0730 02:50:45.543976 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0730 02:50:45.569911 1658870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19348-1592571/.minikube/certs/1597958.pem --> /usr/share/ca-certificates/1597958.pem (1338 bytes)
	I0730 02:50:45.597966 1658870 ssh_runner.go:195] Run: openssl version
	I0730 02:50:45.603915 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15979582.pem && ln -fs /usr/share/ca-certificates/15979582.pem /etc/ssl/certs/15979582.pem"
	I0730 02:50:45.614112 1658870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15979582.pem
	I0730 02:50:45.617992 1658870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 02:37 /usr/share/ca-certificates/15979582.pem
	I0730 02:50:45.618119 1658870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15979582.pem
	I0730 02:50:45.625284 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15979582.pem /etc/ssl/certs/3ec20f2e.0"
	I0730 02:50:45.634677 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0730 02:50:45.644820 1658870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:50:45.648640 1658870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:50:45.648783 1658870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0730 02:50:45.656620 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0730 02:50:45.665826 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1597958.pem && ln -fs /usr/share/ca-certificates/1597958.pem /etc/ssl/certs/1597958.pem"
	I0730 02:50:45.675622 1658870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1597958.pem
	I0730 02:50:45.679099 1658870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 02:37 /usr/share/ca-certificates/1597958.pem
	I0730 02:50:45.679166 1658870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1597958.pem
	I0730 02:50:45.688196 1658870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1597958.pem /etc/ssl/certs/51391683.0"
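
The hash-and-link steps above follow the OpenSSL CA directory convention: /etc/ssl/certs/<subject-hash>.0 must be a symlink to the certificate so OpenSSL can look it up by subject hash (b5213941 for minikubeCA in this run). A hedged Go sketch of one such step, shelling out to openssl exactly as the log does:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash prints the subject hash, e.g. b5213941 in the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: replace any existing link, then point it at the CA
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
```
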
	I0730 02:50:45.698272 1658870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0730 02:50:45.701987 1658870 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0730 02:50:45.702031 1658870 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.30.3  false true} ...
	I0730 02:50:45.702113 1658870 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-642542-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-642542 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
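
The kubelet drop-in printed by kubeadm.go:946 clears ExecStart= first so the override fully replaces the packaged command line, then restarts kubelet with the node-specific hostname and IP. A small sketch of rendering that unit with text/template; the template fields are assumptions, the flag values are the ones logged above.

```go
package main

import (
	"os"
	"text/template"
)

// unit mirrors the drop-in shown above; the empty ExecStart= line is the
// systemd idiom for replacing, not appending to, the existing command.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, map[string]string{
		"BinDir":   "/var/lib/minikube/binaries/v1.30.3",
		"NodeName": "ha-642542-m04",
		"NodeIP":   "192.168.49.5",
	}); err != nil {
		panic(err)
	}
}
```
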
	I0730 02:50:45.702209 1658870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0730 02:50:45.711172 1658870 binaries.go:44] Found k8s binaries, skipping transfer
	I0730 02:50:45.711258 1658870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0730 02:50:45.720089 1658870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0730 02:50:45.738455 1658870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0730 02:50:45.756959 1658870 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0730 02:50:45.760539 1658870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0730 02:50:45.772253 1658870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:50:45.862601 1658870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 02:50:45.874359 1658870 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0730 02:50:45.874860 1658870 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:50:45.877277 1658870 out.go:177] * Verifying Kubernetes components...
	I0730 02:50:45.879468 1658870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0730 02:50:45.976040 1658870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0730 02:50:45.988617 1658870 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:50:45.988889 1658870 kapi.go:59] client config for ha-642542: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/client.crt", KeyFile:"/home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/ha-642542/client.key", CAFile:"/home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17a5cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0730 02:50:45.988963 1658870 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0730 02:50:45.989185 1658870 node_ready.go:35] waiting up to 6m0s for node "ha-642542-m04" to be "Ready" ...
	I0730 02:50:45.989258 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m04
	I0730 02:50:45.989267 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:45.989276 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:45.989281 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:45.992131 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:45.992731 1658870 node_ready.go:49] node "ha-642542-m04" has status "Ready":"True"
	I0730 02:50:45.992755 1658870 node_ready.go:38] duration metric: took 3.551955ms for node "ha-642542-m04" to be "Ready" ...
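
node_ready.go polls GET /api/v1/nodes/ha-642542-m04 until the node reports the Ready condition, after first overriding the stale HA VIP (192.168.49.254) with a reachable API server (192.168.49.2), as the kubeadm.go:483 warning shows. A minimal client-go sketch of that wait, under the assumption that the kubeconfig path from the log is usable:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19348-1592571/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.Host = "https://192.168.49.2:8443" // override the stale VIP, as the log does
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-642542-m04", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for node Ready")
}
```
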
	I0730 02:50:45.992765 1658870 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 02:50:45.992824 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0730 02:50:45.992837 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:45.992852 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:45.992861 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:46.000199 1658870 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0730 02:50:46.016325 1658870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace to be "Ready" ...
	I0730 02:50:46.016471 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:46.016479 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:46.016488 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:46.016495 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:46.019647 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:46.020757 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:46.020779 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:46.020789 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:46.020792 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:46.025558 1658870 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
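
The round_trippers lines that follow repeat this pair of GETs (pod, then node) roughly every 500ms until coredns reports Ready. The trace format itself comes from wrapping the HTTP transport; a hedged, self-contained sketch of such a logging RoundTripper (illustrative only, not client-go's implementation) is:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// logRT mimics the round_trippers trace above: it logs the method, URL,
// and response status together with the elapsed request time.
type logRT struct{ next http.RoundTripper }

func (l logRT) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	status := "error"
	if resp != nil {
		status = resp.Status
	}
	log.Printf("%s %s -> %s in %s", req.Method, req.URL, status, time.Since(start).Round(time.Millisecond))
	return resp, err
}

func main() {
	client := &http.Client{Transport: logRT{next: http.DefaultTransport}}
	// endpoint taken from the trace above; the real test also presents the
	// client certificates listed in the kapi.go config line
	resp, err := client.Get("https://192.168.49.2:8443/api/v1/nodes/ha-642542")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}
```
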
	I0730 02:50:46.516843 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:46.516867 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:46.516877 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:46.516882 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:46.519920 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:46.520710 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:46.520734 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:46.520744 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:46.520748 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:46.523442 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:47.016555 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:47.016577 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:47.016586 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:47.016591 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:47.021646 1658870 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 02:50:47.022777 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:47.022796 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:47.022814 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:47.022820 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:47.026218 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:47.516585 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:47.516612 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:47.516622 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:47.516626 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:47.519642 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:47.520528 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:47.520552 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:47.520563 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:47.520568 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:47.523213 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:48.017253 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:48.017277 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:48.017287 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:48.017292 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:48.021592 1658870 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 02:50:48.022870 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:48.022895 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:48.022908 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:48.022913 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:48.025916 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:48.026619 1658870 pod_ready.go:102] pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace has status "Ready":"False"
	I0730 02:50:48.517328 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:48.517354 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:48.517364 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:48.517370 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:48.522892 1658870 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0730 02:50:48.524601 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:48.524626 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:48.524635 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:48.524639 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:48.527613 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:49.017033 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:49.017055 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:49.017064 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:49.017086 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:49.019949 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:49.020643 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:49.020662 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:49.020672 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:49.020675 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:49.023140 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:49.517176 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:49.517202 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:49.517212 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:49.517216 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:49.520832 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:49.521613 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:49.521635 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:49.521644 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:49.521649 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:49.524248 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:50.017294 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:50.017322 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:50.017333 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:50.017339 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:50.020625 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:50.022004 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:50.022027 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:50.022037 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:50.022040 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:50.025188 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:50.517529 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:50.517557 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:50.517567 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:50.517571 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:50.520484 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:50.521337 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:50.521356 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:50.521365 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:50.521369 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:50.524418 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:50.525070 1658870 pod_ready.go:102] pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace has status "Ready":"False"
	I0730 02:50:51.017243 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:51.017269 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:51.017283 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:51.017288 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:51.020548 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:51.021578 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:51.021600 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:51.021609 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:51.021632 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:51.025300 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:51.516584 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:51.516607 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:51.516616 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:51.516620 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:51.519930 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:51.520661 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:51.520681 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:51.520691 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:51.520695 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:51.523321 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:52.016601 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:52.016626 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:52.016635 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:52.016640 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:52.019603 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:52.020408 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:52.020427 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:52.020437 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:52.020441 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:52.023235 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:52.517202 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:52.517225 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:52.517234 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:52.517240 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:52.520162 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:52.521295 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:52.521315 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:52.521325 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:52.521363 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:52.524123 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:53.017233 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:53.017255 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:53.017265 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:53.017268 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:53.020434 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:53.021436 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:53.021466 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:53.021476 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:53.021502 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:53.024334 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:53.024858 1658870 pod_ready.go:102] pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace has status "Ready":"False"
	I0730 02:50:53.516611 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:53.516634 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:53.516644 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:53.516650 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:53.519571 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:53.520311 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:53.520323 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:53.520332 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:53.520335 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:53.523014 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:54.017238 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:54.017261 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:54.017271 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:54.017274 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:54.021825 1658870 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 02:50:54.022831 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:54.022849 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:54.022858 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:54.022863 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:54.025718 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:54.516951 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:54.516975 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:54.516997 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:54.517005 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:54.520867 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:54.521950 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:54.521972 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:54.521982 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:54.521987 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:54.524556 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:55.016728 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:55.016761 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:55.016772 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:55.016776 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:55.023703 1658870 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0730 02:50:55.024825 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:55.024854 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:55.024864 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:55.024870 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:55.028337 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:55.028965 1658870 pod_ready.go:102] pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace has status "Ready":"False"
	I0730 02:50:55.516885 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:55.516907 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:55.516922 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:55.516929 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:55.519686 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:55.520501 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:55.520523 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:55.520532 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:55.520535 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:55.522897 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:56.017257 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:56.017281 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:56.017292 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:56.017297 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:56.020445 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:56.021690 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:56.021714 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:56.021725 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:56.021735 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:56.024974 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:56.516774 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:56.516799 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:56.516817 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:56.516880 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:56.520252 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:56.520953 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:56.520973 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:56.520994 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:56.521001 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:56.523919 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:57.016528 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:57.016549 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:57.016568 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:57.016574 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:57.020171 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:57.021343 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:57.021365 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:57.021377 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:57.021382 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:57.024101 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:57.517211 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:57.517236 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:57.517246 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:57.517262 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:57.520194 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:57.521012 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:57.521033 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:57.521043 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:57.521049 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:57.523382 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:57.524166 1658870 pod_ready.go:102] pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace has status "Ready":"False"
	I0730 02:50:58.017523 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:58.017549 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:58.017560 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:58.017564 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:58.021131 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:58.021925 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:58.021948 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:58.021958 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:58.021964 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:58.025063 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:50:58.516609 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:58.516634 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:58.516644 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:58.516650 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:58.519540 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:58.520417 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:58.520438 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:58.520448 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:58.520453 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:58.523134 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:59.016557 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:59.016584 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:59.016594 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:59.016600 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:59.019424 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:59.020201 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:59.020218 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:59.020227 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:59.020232 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:59.022624 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:59.517190 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:50:59.517264 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:59.517280 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:59.517287 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:59.520178 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:50:59.520923 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:50:59.520942 1658870 round_trippers.go:469] Request Headers:
	I0730 02:50:59.520951 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:50:59.520955 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:50:59.523505 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:00.017394 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:00.017418 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:00.017428 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:00.017435 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:00.043429 1658870 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0730 02:51:00.053842 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:00.053933 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:00.053961 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:00.053985 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:00.073842 1658870 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0730 02:51:00.074983 1658870 pod_ready.go:102] pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace has status "Ready":"False"
	I0730 02:51:00.516686 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:00.516723 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:00.516733 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:00.516739 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:00.520756 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:00.524814 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:00.524835 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:00.524844 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:00.524849 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:00.528227 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:01.016606 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:01.016638 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:01.016648 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:01.016654 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:01.019559 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:01.020875 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:01.020894 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:01.020903 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:01.020913 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:01.023854 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:01.517240 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:01.517262 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:01.517271 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:01.517275 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:01.520525 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:01.521833 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:01.521851 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:01.521869 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:01.521877 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:01.524881 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:02.017237 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:02.017308 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:02.017330 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:02.017350 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:02.021172 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:02.022392 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:02.022459 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:02.022482 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:02.022502 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:02.025675 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:02.517163 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:02.517187 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:02.517197 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:02.517207 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:02.520158 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:02.521349 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:02.521369 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:02.521377 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:02.521383 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:02.524078 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:02.524904 1658870 pod_ready.go:102] pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace has status "Ready":"False"
	I0730 02:51:03.029233 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:03.029264 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:03.029280 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:03.029289 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:03.033191 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:03.034047 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:03.034066 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:03.034076 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:03.034082 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:03.036768 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:03.517247 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:03.517269 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:03.517278 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:03.517282 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:03.520299 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:03.521402 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:03.521425 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:03.521434 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:03.521439 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:03.523841 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:04.017360 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:04.017384 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:04.017393 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:04.017397 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:04.020822 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:04.021828 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:04.021853 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:04.021863 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:04.021867 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:04.024651 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:04.517253 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:04.517273 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:04.517282 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:04.517286 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:04.522060 1658870 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 02:51:04.523255 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:04.523272 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:04.523281 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:04.523285 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:04.525841 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:04.526874 1658870 pod_ready.go:102] pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace has status "Ready":"False"
	I0730 02:51:05.017264 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:05.017294 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:05.017304 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:05.017310 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:05.020469 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:05.021407 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:05.021432 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:05.021442 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:05.021446 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:05.024538 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:05.517434 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:05.517461 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:05.517469 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:05.517474 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:05.520279 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:05.521031 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:05.521052 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:05.521061 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:05.521066 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:05.523567 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:06.016650 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:06.016675 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:06.016685 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:06.016690 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:06.020364 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:06.021077 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:06.021099 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:06.021109 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:06.021114 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:06.024200 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:06.517106 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:06.517171 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:06.517187 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:06.517192 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:06.520306 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:06.521042 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:06.521064 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:06.521073 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:06.521077 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:06.525562 1658870 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 02:51:07.016772 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:07.016795 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.016805 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.016809 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.019880 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:07.020564 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:07.020587 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.020597 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.020601 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.023031 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:07.023923 1658870 pod_ready.go:102] pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace has status "Ready":"False"
	I0730 02:51:07.517097 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5shks
	I0730 02:51:07.517120 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.517131 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.517136 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.520003 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:07.521113 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:07.521134 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.521144 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.521150 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.523804 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:07.524453 1658870 pod_ready.go:97] node "ha-642542" hosting pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:07.524478 1658870 pod_ready.go:81] duration metric: took 21.508112957s for pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:07.524489 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542" hosting pod "coredns-7db6d8ff4d-5shks" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
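The block above shows the waiter's core pattern: GET the pod, GET the node hosting it, and abort with "(skipping!)" when the node's Ready condition is not True. A minimal client-go sketch of that loop (`waitPodReady` is a hypothetical helper, not minikube's actual pod_ready.go code):

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod, then its hosting node, and aborts with an error
// (the "(skipping!)" case in the log) when the node is not Ready.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				// mirrors pod_ready.go:97: node hosting the pod is not "Ready"
				return false, fmt.Errorf("node %q hosting pod %q is not Ready", node.Name, name)
			}
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil // pod still has "Ready":"False", keep polling
	})
}
```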
	I0730 02:51:07.524495 1658870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7vr5f" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:07.524566 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7vr5f
	I0730 02:51:07.524578 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.524586 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.524591 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.528082 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:07.528739 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:07.528750 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.528758 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.528762 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.531172 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:07.531676 1658870 pod_ready.go:97] node "ha-642542" hosting pod "coredns-7db6d8ff4d-7vr5f" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:07.531691 1658870 pod_ready.go:81] duration metric: took 7.177525ms for pod "coredns-7db6d8ff4d-7vr5f" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:07.531701 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542" hosting pod "coredns-7db6d8ff4d-7vr5f" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:07.531709 1658870 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:07.531768 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-642542
	I0730 02:51:07.531773 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.531781 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.531784 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.534220 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:07.535037 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:07.535057 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.535064 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.535069 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.537533 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:07.538053 1658870 pod_ready.go:97] node "ha-642542" hosting pod "etcd-ha-642542" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:07.538068 1658870 pod_ready.go:81] duration metric: took 6.352782ms for pod "etcd-ha-642542" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:07.538077 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542" hosting pod "etcd-ha-642542" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:07.538085 1658870 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:07.538142 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-642542-m02
	I0730 02:51:07.538147 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.538155 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.538161 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.540842 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:07.541695 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:51:07.541713 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.541723 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.541727 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.544800 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:07.545621 1658870 pod_ready.go:92] pod "etcd-ha-642542-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 02:51:07.545644 1658870 pod_ready.go:81] duration metric: took 7.552105ms for pod "etcd-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:07.545656 1658870 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:07.545742 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-642542-m03
	I0730 02:51:07.545768 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.545777 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.545794 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.548283 1658870 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0730 02:51:07.548447 1658870 pod_ready.go:97] error getting pod "etcd-ha-642542-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-642542-m03" not found
	I0730 02:51:07.548467 1658870 pod_ready.go:81] duration metric: took 2.800622ms for pod "etcd-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:07.548480 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-ha-642542-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-642542-m03" not found
	I0730 02:51:07.548508 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:07.548576 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542
	I0730 02:51:07.548586 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.548594 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.548598 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.551366 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:07.717645 1658870 request.go:629] Waited for 165.280838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:07.717701 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:07.717707 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.717716 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.717725 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.720560 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:07.721216 1658870 pod_ready.go:97] node "ha-642542" hosting pod "kube-apiserver-ha-642542" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:07.721237 1658870 pod_ready.go:81] duration metric: took 172.71882ms for pod "kube-apiserver-ha-642542" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:07.721248 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542" hosting pod "kube-apiserver-ha-642542" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
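The request.go:629 lines above come from client-go's default client-side rate limiter (a token bucket at QPS 5 / Burst 10), not from API Priority and Fairness, which is why each extra GET waits roughly 200ms. A hedged sketch of how a client could raise those limits (the `newFastClient` name and the values are illustrative):

```go
package sketch

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a larger token bucket so polling
// loops like the one above are not delayed by client-side throttling.
func newFastClient(kubeconfig string) *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cfg.QPS = 50    // default is 5
	cfg.Burst = 100 // default is 10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	return cs
}
```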
	I0730 02:51:07.721255 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:07.917156 1658870 request.go:629] Waited for 195.832486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542-m02
	I0730 02:51:07.917245 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542-m02
	I0730 02:51:07.917281 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:07.917293 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:07.917328 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:07.920350 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:08.117272 1658870 request.go:629] Waited for 196.278678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:51:08.117331 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:51:08.117344 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:08.117359 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:08.117365 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:08.120078 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:08.120708 1658870 pod_ready.go:92] pod "kube-apiserver-ha-642542-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 02:51:08.120730 1658870 pod_ready.go:81] duration metric: took 399.462414ms for pod "kube-apiserver-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:08.120741 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:08.317622 1658870 request.go:629] Waited for 196.816398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542-m03
	I0730 02:51:08.317684 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542-m03
	I0730 02:51:08.317695 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:08.317704 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:08.317712 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:08.320309 1658870 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0730 02:51:08.320611 1658870 pod_ready.go:97] error getting pod "kube-apiserver-ha-642542-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-642542-m03" not found
	I0730 02:51:08.320637 1658870 pod_ready.go:81] duration metric: took 199.887584ms for pod "kube-apiserver-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:08.320648 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-ha-642542-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-642542-m03" not found
	I0730 02:51:08.320658 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:08.518063 1658870 request.go:629] Waited for 197.329402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542
	I0730 02:51:08.518123 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542
	I0730 02:51:08.518135 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:08.518145 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:08.518152 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:08.520956 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:08.718070 1658870 request.go:629] Waited for 196.300463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:08.718188 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:08.718236 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:08.718268 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:08.718291 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:08.720895 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:08.721581 1658870 pod_ready.go:97] node "ha-642542" hosting pod "kube-controller-manager-ha-642542" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:08.721609 1658870 pod_ready.go:81] duration metric: took 400.939072ms for pod "kube-controller-manager-ha-642542" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:08.721620 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542" hosting pod "kube-controller-manager-ha-642542" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:08.721628 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:08.917145 1658870 request.go:629] Waited for 195.452861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542-m02
	I0730 02:51:08.917219 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542-m02
	I0730 02:51:08.917264 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:08.917286 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:08.917292 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:08.920061 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:09.117125 1658870 request.go:629] Waited for 196.199362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:51:09.117187 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:51:09.117197 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:09.117206 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:09.117216 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:09.120135 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:09.120788 1658870 pod_ready.go:92] pod "kube-controller-manager-ha-642542-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 02:51:09.120807 1658870 pod_ready.go:81] duration metric: took 399.170385ms for pod "kube-controller-manager-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:09.120819 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:09.318080 1658870 request.go:629] Waited for 197.190944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542-m03
	I0730 02:51:09.318149 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-642542-m03
	I0730 02:51:09.318162 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:09.318172 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:09.318180 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:09.321074 1658870 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0730 02:51:09.321489 1658870 pod_ready.go:97] error getting pod "kube-controller-manager-ha-642542-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-642542-m03" not found
	I0730 02:51:09.321514 1658870 pod_ready.go:81] duration metric: took 200.687107ms for pod "kube-controller-manager-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:09.321524 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-642542-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-642542-m03" not found
	I0730 02:51:09.321532 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-72lmf" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:09.517992 1658870 request.go:629] Waited for 196.388068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72lmf
	I0730 02:51:09.518120 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72lmf
	I0730 02:51:09.518127 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:09.518135 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:09.518138 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:09.521361 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:09.717428 1658870 request.go:629] Waited for 195.362427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:09.717497 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:09.717507 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:09.717516 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:09.717521 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:09.720315 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:09.720892 1658870 pod_ready.go:97] node "ha-642542" hosting pod "kube-proxy-72lmf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:09.720911 1658870 pod_ready.go:81] duration metric: took 399.372571ms for pod "kube-proxy-72lmf" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:09.720921 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542" hosting pod "kube-proxy-72lmf" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:09.720929 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7rrfn" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:09.917152 1658870 request.go:629] Waited for 196.140475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrfn
	I0730 02:51:09.917221 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7rrfn
	I0730 02:51:09.917229 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:09.917244 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:09.917253 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:09.920342 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:10.117099 1658870 request.go:629] Waited for 196.052733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m04
	I0730 02:51:10.117254 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m04
	I0730 02:51:10.117279 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:10.117301 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:10.117322 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:10.121637 1658870 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0730 02:51:10.123669 1658870 pod_ready.go:92] pod "kube-proxy-7rrfn" in "kube-system" namespace has status "Ready":"True"
	I0730 02:51:10.123774 1658870 pod_ready.go:81] duration metric: took 402.831426ms for pod "kube-proxy-7rrfn" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:10.123809 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7txb9" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:10.317178 1658870 request.go:629] Waited for 193.285751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7txb9
	I0730 02:51:10.317266 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7txb9
	I0730 02:51:10.317295 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:10.317321 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:10.317332 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:10.320023 1658870 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0730 02:51:10.320158 1658870 pod_ready.go:97] error getting pod "kube-proxy-7txb9" in "kube-system" namespace (skipping!): pods "kube-proxy-7txb9" not found
	I0730 02:51:10.320175 1658870 pod_ready.go:81] duration metric: took 196.346207ms for pod "kube-proxy-7txb9" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:10.320187 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-7txb9" in "kube-system" namespace (skipping!): pods "kube-proxy-7txb9" not found
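The 404s above are expected: the m03 control-plane pods were removed along with the node, and the waiter treats NotFound as skip-and-continue rather than a retryable failure. A sketch of that branch using apimachinery's error helpers (`podExists` is a hypothetical name):

```go
package sketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podExists distinguishes "pod is gone" (the 404s above) from real errors:
// a NotFound response means skip the wait for this pod entirely.
func podExists(cs kubernetes.Interface, ns, name string) (bool, error) {
	_, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil // "pods ... not found": skip, don't retry
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
```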
	I0730 02:51:10.320194 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqcsg" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:10.517601 1658870 request.go:629] Waited for 197.324669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqcsg
	I0730 02:51:10.518109 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bqcsg
	I0730 02:51:10.518125 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:10.518135 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:10.518141 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:10.521138 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:10.717996 1658870 request.go:629] Waited for 195.817227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:51:10.718061 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:51:10.718072 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:10.718081 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:10.718085 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:10.720867 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:10.721639 1658870 pod_ready.go:92] pod "kube-proxy-bqcsg" in "kube-system" namespace has status "Ready":"True"
	I0730 02:51:10.721661 1658870 pod_ready.go:81] duration metric: took 401.458413ms for pod "kube-proxy-bqcsg" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:10.721686 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-642542" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:10.917138 1658870 request.go:629] Waited for 195.374759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542
	I0730 02:51:10.917232 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542
	I0730 02:51:10.917239 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:10.917254 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:10.917261 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:10.920343 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:11.117482 1658870 request.go:629] Waited for 196.360681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:11.117541 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542
	I0730 02:51:11.117551 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:11.117560 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:11.117573 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:11.120504 1658870 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0730 02:51:11.121366 1658870 pod_ready.go:97] node "ha-642542" hosting pod "kube-scheduler-ha-642542" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:11.121431 1658870 pod_ready.go:81] duration metric: took 399.727026ms for pod "kube-scheduler-ha-642542" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:11.121456 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-642542" hosting pod "kube-scheduler-ha-642542" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-642542" has status "Ready":"Unknown"
	I0730 02:51:11.121504 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:11.317988 1658870 request.go:629] Waited for 196.360394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542-m02
	I0730 02:51:11.318059 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542-m02
	I0730 02:51:11.318070 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:11.318084 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:11.318089 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:11.321258 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:11.517188 1658870 request.go:629] Waited for 195.236146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:51:11.517250 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-642542-m02
	I0730 02:51:11.517261 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:11.517270 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:11.517278 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:11.520551 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:11.521422 1658870 pod_ready.go:92] pod "kube-scheduler-ha-642542-m02" in "kube-system" namespace has status "Ready":"True"
	I0730 02:51:11.521447 1658870 pod_ready.go:81] duration metric: took 399.907272ms for pod "kube-scheduler-ha-642542-m02" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:11.521459 1658870 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	I0730 02:51:11.717478 1658870 request.go:629] Waited for 195.921036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542-m03
	I0730 02:51:11.717537 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-642542-m03
	I0730 02:51:11.717549 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:11.717580 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:11.717590 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:11.720340 1658870 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0730 02:51:11.720489 1658870 pod_ready.go:97] error getting pod "kube-scheduler-ha-642542-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-642542-m03" not found
	I0730 02:51:11.720505 1658870 pod_ready.go:81] duration metric: took 199.038942ms for pod "kube-scheduler-ha-642542-m03" in "kube-system" namespace to be "Ready" ...
	E0730 02:51:11.720515 1658870 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-642542-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-642542-m03" not found
	I0730 02:51:11.720527 1658870 pod_ready.go:38] duration metric: took 25.727753032s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0730 02:51:11.720546 1658870 system_svc.go:44] waiting for kubelet service to be running ....
	I0730 02:51:11.720611 1658870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 02:51:11.732627 1658870 system_svc.go:56] duration metric: took 12.070011ms WaitForService to wait for kubelet
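The kubelet check above shells out to systemctl and treats the command's exit status as the signal. A sketch of the same check run locally (arguments copied verbatim from the ssh_runner line; in minikube this actually executes over SSH inside the node):

```go
package sketch

import "os/exec"

// kubeletActive mirrors the system_svc check above: systemctl is-active
// exits 0 when the unit is running, and --quiet suppresses the textual state.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}
```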
	I0730 02:51:11.732698 1658870 kubeadm.go:582] duration metric: took 25.858294689s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0730 02:51:11.732731 1658870 node_conditions.go:102] verifying NodePressure condition ...
	I0730 02:51:11.918144 1658870 request.go:629] Waited for 185.326953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0730 02:51:11.918221 1658870 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0730 02:51:11.918232 1658870 round_trippers.go:469] Request Headers:
	I0730 02:51:11.918249 1658870 round_trippers.go:473]     Accept: application/json, */*
	I0730 02:51:11.918254 1658870 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0730 02:51:11.921723 1658870 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0730 02:51:11.922920 1658870 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0730 02:51:11.922947 1658870 node_conditions.go:123] node cpu capacity is 2
	I0730 02:51:11.922958 1658870 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0730 02:51:11.922963 1658870 node_conditions.go:123] node cpu capacity is 2
	I0730 02:51:11.922967 1658870 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0730 02:51:11.922972 1658870 node_conditions.go:123] node cpu capacity is 2
	I0730 02:51:11.922977 1658870 node_conditions.go:105] duration metric: took 190.240894ms to run NodePressure ...
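The NodePressure pass above lists all three nodes and logs the same two capacity fields for each. A sketch of an equivalent check (`printNodeCapacity` is a hypothetical helper):

```go
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node and prints the two capacity fields
// that node_conditions.go logs above: ephemeral storage and CPU.
func printNodeCapacity(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
```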
	I0730 02:51:11.922993 1658870 start.go:241] waiting for startup goroutines ...
	I0730 02:51:11.923019 1658870 start.go:255] writing updated cluster config ...
	I0730 02:51:11.923336 1658870 ssh_runner.go:195] Run: rm -f paused
	I0730 02:51:12.007702 1658870 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0730 02:51:12.010277 1658870 out.go:177] * Done! kubectl is now configured to use "ha-642542" cluster and "default" namespace by default
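The skew check behind start.go:600 compares the local kubectl version against the server's. A sketch of fetching the server half via the discovery client (the log above shows both sides at 1.30.3, so minor skew is 0):

```go
package sketch

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// serverVersion fetches the cluster's version, the "cluster: 1.30.3" half
// of the skew comparison logged above.
func serverVersion(cs *kubernetes.Clientset) {
	if v, err := cs.Discovery().ServerVersion(); err == nil {
		fmt.Printf("cluster: %s.%s\n", v.Major, v.Minor)
	}
}
```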
	
	
	==> CRI-O <==
	Jul 30 02:50:34 ha-642542 crio[645]: time="2024-07-30 02:50:34.244259851Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e9f5c835b48ebe5f597b9e72f9e5187d1e69d7075ca0b366fd52c507a7baaf24/merged/etc/group: no such file or directory"
	Jul 30 02:50:34 ha-642542 crio[645]: time="2024-07-30 02:50:34.297073309Z" level=info msg="Created container 49642553cbd3da4e3e1e6776a4ff7336068f209ad8b54a805367b613fa295c4d: kube-system/kube-vip-ha-642542/kube-vip" id=8deedc1d-2876-4058-9ac0-e84c36b60bdf name=/runtime.v1.RuntimeService/CreateContainer
	Jul 30 02:50:34 ha-642542 crio[645]: time="2024-07-30 02:50:34.297688170Z" level=info msg="Starting container: 49642553cbd3da4e3e1e6776a4ff7336068f209ad8b54a805367b613fa295c4d" id=b9e09db4-216b-4085-be6c-0b7f8f6c8e91 name=/runtime.v1.RuntimeService/StartContainer
	Jul 30 02:50:34 ha-642542 crio[645]: time="2024-07-30 02:50:34.312865927Z" level=info msg="Started container" PID=1826 containerID=49642553cbd3da4e3e1e6776a4ff7336068f209ad8b54a805367b613fa295c4d description=kube-system/kube-vip-ha-642542/kube-vip id=b9e09db4-216b-4085-be6c-0b7f8f6c8e91 name=/runtime.v1.RuntimeService/StartContainer sandboxID=57fa401f7a800e5ddf62a7ca421764c43bf6e83d82ec635e9df8971c19941a58
	Jul 30 02:50:42 ha-642542 conmon[1282]: conmon e038a7d060d566aec469 <ninfo>: container 1312 exited with status 1
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.243356732Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=556b44a4-d987-48f0-807e-812b9d7d92e1 name=/runtime.v1.ImageService/ImageStatus
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.243566951Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=556b44a4-d987-48f0-807e-812b9d7d92e1 name=/runtime.v1.ImageService/ImageStatus
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.244330206Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cfeab017-a378-49b8-b463-c7b354fead61 name=/runtime.v1.ImageService/ImageStatus
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.244518731Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cfeab017-a378-49b8-b463-c7b354fead61 name=/runtime.v1.ImageService/ImageStatus
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.245212154Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3937b1a6-034e-441c-accb-ea6df7dd0675 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.245911624Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.261460500Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/622c242e9d3be8b2abf21cb1f1858e8550a38493b8b7c1fc5a2e126aecaa6ed4/merged/etc/passwd: no such file or directory"
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.261622949Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/622c242e9d3be8b2abf21cb1f1858e8550a38493b8b7c1fc5a2e126aecaa6ed4/merged/etc/group: no such file or directory"
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.313420769Z" level=info msg="Created container be1c88e316e7e11910cb5a21f1f2499e84e9beed1f58456fd1b096346aa33ad0: kube-system/storage-provisioner/storage-provisioner" id=3937b1a6-034e-441c-accb-ea6df7dd0675 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.314523364Z" level=info msg="Starting container: be1c88e316e7e11910cb5a21f1f2499e84e9beed1f58456fd1b096346aa33ad0" id=e6d9d164-6860-4322-bc5a-efd1779e9125 name=/runtime.v1.RuntimeService/StartContainer
	Jul 30 02:50:43 ha-642542 crio[645]: time="2024-07-30 02:50:43.331741245Z" level=info msg="Started container" PID=1883 containerID=be1c88e316e7e11910cb5a21f1f2499e84e9beed1f58456fd1b096346aa33ad0 description=kube-system/storage-provisioner/storage-provisioner id=e6d9d164-6860-4322-bc5a-efd1779e9125 name=/runtime.v1.RuntimeService/StartContainer sandboxID=83e133cf3d934952ffbf096a43dc815ba08d09df4197aa081eb594d53c0eafca
	Jul 30 02:51:00 ha-642542 crio[645]: time="2024-07-30 02:51:00.018089748Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.30.3" id=e97af4a1-0ee9-4634-9ab8-2de5f31b4920 name=/runtime.v1.ImageService/ImageStatus
	Jul 30 02:51:00 ha-642542 crio[645]: time="2024-07-30 02:51:00.018351231Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499 registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7],Size_:108229958,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=e97af4a1-0ee9-4634-9ab8-2de5f31b4920 name=/runtime.v1.ImageService/ImageStatus
	Jul 30 02:51:00 ha-642542 crio[645]: time="2024-07-30 02:51:00.019365500Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.30.3" id=bbde4dd0-86f5-4f33-bb4c-94f6a71beb60 name=/runtime.v1.ImageService/ImageStatus
	Jul 30 02:51:00 ha-642542 crio[645]: time="2024-07-30 02:51:00.019635680Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499 registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7],Size_:108229958,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=bbde4dd0-86f5-4f33-bb4c-94f6a71beb60 name=/runtime.v1.ImageService/ImageStatus
	Jul 30 02:51:00 ha-642542 crio[645]: time="2024-07-30 02:51:00.020901143Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-642542/kube-controller-manager" id=8643795e-ea09-4da6-8743-deb57b29f6fb name=/runtime.v1.RuntimeService/CreateContainer
	Jul 30 02:51:00 ha-642542 crio[645]: time="2024-07-30 02:51:00.041694750Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 30 02:51:00 ha-642542 crio[645]: time="2024-07-30 02:51:00.261936227Z" level=info msg="Created container ed1991eedbf8795f074230fd86fdbe4ca438eee531eb87a4008bd1c995e389c5: kube-system/kube-controller-manager-ha-642542/kube-controller-manager" id=8643795e-ea09-4da6-8743-deb57b29f6fb name=/runtime.v1.RuntimeService/CreateContainer
	Jul 30 02:51:00 ha-642542 crio[645]: time="2024-07-30 02:51:00.262609499Z" level=info msg="Starting container: ed1991eedbf8795f074230fd86fdbe4ca438eee531eb87a4008bd1c995e389c5" id=e0634755-817c-4200-8a5a-54c6821f7935 name=/runtime.v1.RuntimeService/StartContainer
	Jul 30 02:51:00 ha-642542 crio[645]: time="2024-07-30 02:51:00.272756840Z" level=info msg="Started container" PID=1922 containerID=ed1991eedbf8795f074230fd86fdbe4ca438eee531eb87a4008bd1c995e389c5 description=kube-system/kube-controller-manager-ha-642542/kube-controller-manager id=e0634755-817c-4200-8a5a-54c6821f7935 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3aa5c7525098764c839f845a90bd45ce4e4aca6dc585446dc3d2343bb7f2abb2
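The CreateContainer/StartContainer pairs above are CRI calls arriving over CRI-O's gRPC socket, and the container table that follows is gathered the same way. A sketch of dialing that socket and listing containers, assuming the cri-socket path from the node annotations further down (illustrative, not minikube's code):

```go
package sketch

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// listContainers dials CRI-O's socket and lists containers, roughly what
// produces the "container status" table below.
func listContainers(ctx context.Context) ([]*runtimeapi.Container, error) {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return nil, err
	}
	defer conn.Close()
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		return nil, err
	}
	return resp.Containers, nil
}
```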
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ed1991eedbf87       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a   14 seconds ago       Running             kube-controller-manager   8                   3aa5c75250987       kube-controller-manager-ha-642542
	be1c88e316e7e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   31 seconds ago       Running             storage-provisioner       4                   83e133cf3d934       storage-provisioner
	49642553cbd3d       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   40 seconds ago       Running             kube-vip                  3                   57fa401f7a800       kube-vip-ha-642542
	e984781037ce0       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca   44 seconds ago       Running             kube-apiserver            4                   eaa765ad12631       kube-apiserver-ha-642542
	bb11e54814707       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a   59 seconds ago       Exited              kube-controller-manager   7                   3aa5c75250987       kube-controller-manager-ha-642542
	121c3279b1b60       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   f8683e3956f0b       coredns-7db6d8ff4d-7vr5f
	393f929c61d92       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   db0d5110b1076       busybox-fc5497c4f-hjvpk
	93c53cd34f2cc       f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800   About a minute ago   Running             kindnet-cni               2                   250ebe345c9e5       kindnet-48qbs
	d2281e6def2b8       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   518c3d5d92787       coredns-7db6d8ff4d-5shks
	e038a7d060d56       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       3                   83e133cf3d934       storage-provisioner
	6fc500b1cc841       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be   About a minute ago   Running             kube-proxy                2                   fa15cb7877265       kube-proxy-72lmf
	c2192e4c0ee96       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355   About a minute ago   Running             kube-scheduler            2                   397a5dc8a6b6b       kube-scheduler-ha-642542
	14496513c3919       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   About a minute ago   Exited              kube-vip                  2                   57fa401f7a800       kube-vip-ha-642542
	3572d340b8ba0       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd   About a minute ago   Running             etcd                      2                   7e2afcc5d1ab7       etcd-ha-642542
	c08ef6e89d3d9       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca   About a minute ago   Exited              kube-apiserver            3                   eaa765ad12631       kube-apiserver-ha-642542
	
	
	==> coredns [121c3279b1b6080bcacc8f9e2297ded63cefff014ce364ba77469ed5f24a84a2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46199 - 7249 "HINFO IN 2387649358686852634.7817028372463915337. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023163171s
	
	
	==> coredns [d2281e6def2b897a965aea357e786c5d70ec0b388258be43ce29c79fee48a642] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50856 - 59828 "HINFO IN 1284202220714038339.4975000722431632473. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021492722s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[372250361]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (30-Jul-2024 02:50:12.256) (total time: 30001ms):
	Trace[372250361]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (02:50:42.258)
	Trace[372250361]: [30.001542212s] [30.001542212s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[574808575]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (30-Jul-2024 02:50:12.257) (total time: 30000ms):
	Trace[574808575]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (02:50:42.258)
	Trace[574808575]: [30.000979526s] [30.000979526s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[28719544]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (30-Jul-2024 02:50:12.257) (total time: 30001ms):
	Trace[28719544]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (02:50:42.258)
	Trace[28719544]: [30.001773664s] [30.001773664s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
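The reflector errors above all reduce to one condition: from inside the coredns pod, TCP to the apiserver service VIP 10.96.0.1:443 timed out during the restart window. A minimal sketch of that reachability probe:

```go
package sketch

import (
	"fmt"
	"net"
	"time"
)

// apiserverReachable attempts a plain TCP dial to the in-cluster apiserver
// VIP; a timeout here is exactly the reflector's "i/o timeout" in the log.
func apiserverReachable() bool {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err)
		return false
	}
	conn.Close()
	return true
}
```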
	
	
	==> describe nodes <==
	Name:               ha-642542
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-642542
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9
	                    minikube.k8s.io/name=ha-642542
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_30T02_40_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 02:40:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-642542
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 02:50:25 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 30 Jul 2024 02:49:54 +0000   Tue, 30 Jul 2024 02:51:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 30 Jul 2024 02:49:54 +0000   Tue, 30 Jul 2024 02:51:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 30 Jul 2024 02:49:54 +0000   Tue, 30 Jul 2024 02:51:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 30 Jul 2024 02:49:54 +0000   Tue, 30 Jul 2024 02:51:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-642542
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb2872f4c09f41b3a2eb69043140c0ae
	  System UUID:                7e37aeb9-dede-44f8-b40a-12de7618dcb9
	  Boot ID:                    f43244bd-8d62-45f7-a4e7-2b350386049a
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hjvpk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 coredns-7db6d8ff4d-5shks             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-7db6d8ff4d-7vr5f             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-642542                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-48qbs                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-642542             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-642542    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-72lmf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-642542             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-642542                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 62s                    kube-proxy       
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node ha-642542 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node ha-642542 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node ha-642542 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                    node-controller  Node ha-642542 event: Registered Node ha-642542 in Controller
	  Normal  NodeReady                9m57s                  kubelet          Node ha-642542 status is now: NodeReady
	  Normal  RegisteredNode           9m33s                  node-controller  Node ha-642542 event: Registered Node ha-642542 in Controller
	  Normal  RegisteredNode           8m23s                  node-controller  Node ha-642542 event: Registered Node ha-642542 in Controller
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-642542 event: Registered Node ha-642542 in Controller
	  Normal  Starting                 5m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-642542 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-642542 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m20s (x8 over 5m20s)  kubelet          Node ha-642542 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-642542 event: Registered Node ha-642542 in Controller
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-642542 event: Registered Node ha-642542 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-642542 event: Registered Node ha-642542 in Controller
	  Normal  Starting                 118s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 118s)    kubelet          Node ha-642542 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 118s)    kubelet          Node ha-642542 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 118s)    kubelet          Node ha-642542 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           67s                    node-controller  Node ha-642542 event: Registered Node ha-642542 in Controller
	  Normal  NodeNotReady             7s                     node-controller  Node ha-642542 status is now: NodeNotReady
	  Normal  RegisteredNode           2s                     node-controller  Node ha-642542 event: Registered Node ha-642542 in Controller
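
	In the describe output above, ha-642542 carries node.kubernetes.io/unreachable taints and all four conditions are Unknown because the kubelet stopped posting status during the restart (NodeNotReady fired 7s ago, then the node re-registered 2s ago). A quick way to watch the conditions recover, assuming context ha-642542 (illustrative, not from the harness):

	$ kubectl --context ha-642542 get node ha-642542 -o jsonpath='{range .status.conditions[*]}{.type}={.status} {end}'
	$ kubectl --context ha-642542 -n kube-node-lease get lease ha-642542 -o jsonpath='{.spec.renewTime}'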
	
	
	Name:               ha-642542-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-642542-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9
	                    minikube.k8s.io/name=ha-642542
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T02_41_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 02:41:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-642542-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 02:51:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 02:49:56 +0000   Tue, 30 Jul 2024 02:41:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 02:49:56 +0000   Tue, 30 Jul 2024 02:41:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 02:49:56 +0000   Tue, 30 Jul 2024 02:41:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 02:49:56 +0000   Tue, 30 Jul 2024 02:42:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-642542-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 af7073f0dcae45439f61982fcd28401e
	  System UUID:                b7b8aedc-5909-49d0-ac3a-4057614d5c8c
	  Boot ID:                    f43244bd-8d62-45f7-a4e7-2b350386049a
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-csrtf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 etcd-ha-642542-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m51s
	  kube-system                 kindnet-lsdrr                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m52s
	  kube-system                 kube-apiserver-ha-642542-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-controller-manager-ha-642542-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-proxy-bqcsg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 kube-scheduler-ha-642542-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-vip-ha-642542-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m8s                   kube-proxy       
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  Starting                 26s                    kube-proxy       
	  Normal  Starting                 4m27s                  kube-proxy       
	  Normal  NodeHasSufficientPID     9m52s (x8 over 9m52s)  kubelet          Node ha-642542-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m52s (x8 over 9m52s)  kubelet          Node ha-642542-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m52s (x8 over 9m52s)  kubelet          Node ha-642542-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m48s                  node-controller  Node ha-642542-m02 event: Registered Node ha-642542-m02 in Controller
	  Normal  RegisteredNode           9m33s                  node-controller  Node ha-642542-m02 event: Registered Node ha-642542-m02 in Controller
	  Normal  RegisteredNode           8m23s                  node-controller  Node ha-642542-m02 event: Registered Node ha-642542-m02 in Controller
	  Normal  NodeHasSufficientPID     6m33s (x8 over 6m33s)  kubelet          Node ha-642542-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    6m33s (x8 over 6m33s)  kubelet          Node ha-642542-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m33s (x8 over 6m33s)  kubelet          Node ha-642542-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-642542-m02 event: Registered Node ha-642542-m02 in Controller
	  Normal  NodeHasSufficientPID     5m18s (x8 over 5m18s)  kubelet          Node ha-642542-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-642542-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-642542-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-642542-m02 event: Registered Node ha-642542-m02 in Controller
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-642542-m02 event: Registered Node ha-642542-m02 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-642542-m02 event: Registered Node ha-642542-m02 in Controller
	  Normal  Starting                 115s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)    kubelet          Node ha-642542-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)    kubelet          Node ha-642542-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)    kubelet          Node ha-642542-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           67s                    node-controller  Node ha-642542-m02 event: Registered Node ha-642542-m02 in Controller
	  Normal  RegisteredNode           2s                     node-controller  Node ha-642542-m02 event: Registered Node ha-642542-m02 in Controller
	
	
	Name:               ha-642542-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-642542-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a9ecc7e4bd8b0211d6b42552bd8a0113828840b9
	                    minikube.k8s.io/name=ha-642542
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_30T02_43_46_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jul 2024 02:43:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-642542-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jul 2024 02:51:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jul 2024 02:50:52 +0000   Tue, 30 Jul 2024 02:50:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jul 2024 02:50:52 +0000   Tue, 30 Jul 2024 02:50:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jul 2024 02:50:52 +0000   Tue, 30 Jul 2024 02:50:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jul 2024 02:50:52 +0000   Tue, 30 Jul 2024 02:50:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-642542-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 f834516eed654a40abe3c7c96825eec6
	  System UUID:                a224730e-7637-43e7-9b2b-d2d777406fbe
	  Boot ID:                    f43244bd-8d62-45f7-a4e7-2b350386049a
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6m77b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kindnet-bbnnt              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m29s
	  kube-system                 kube-proxy-7rrfn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m26s                  kube-proxy       
	  Normal  Starting                 11s                    kube-proxy       
	  Normal  Starting                 2m56s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m29s (x2 over 7m29s)  kubelet          Node ha-642542-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m29s (x2 over 7m29s)  kubelet          Node ha-642542-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m29s (x2 over 7m29s)  kubelet          Node ha-642542-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           7m28s                  node-controller  Node ha-642542-m04 event: Registered Node ha-642542-m04 in Controller
	  Normal  RegisteredNode           7m28s                  node-controller  Node ha-642542-m04 event: Registered Node ha-642542-m04 in Controller
	  Normal  RegisteredNode           7m27s                  node-controller  Node ha-642542-m04 event: Registered Node ha-642542-m04 in Controller
	  Normal  NodeReady                7m14s                  kubelet          Node ha-642542-m04 status is now: NodeReady
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-642542-m04 event: Registered Node ha-642542-m04 in Controller
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-642542-m04 event: Registered Node ha-642542-m04 in Controller
	  Normal  NodeNotReady             3m52s                  node-controller  Node ha-642542-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-642542-m04 event: Registered Node ha-642542-m04 in Controller
	  Normal  Starting                 3m16s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-642542-m04 event: Registered Node ha-642542-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m3s (x8 over 3m15s)   kubelet          Node ha-642542-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x8 over 3m15s)   kubelet          Node ha-642542-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x8 over 3m15s)   kubelet          Node ha-642542-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           67s                    node-controller  Node ha-642542-m04 event: Registered Node ha-642542-m04 in Controller
	  Normal  Starting                 34s                    kubelet          Starting kubelet.
	  Normal  NodeNotReady             27s                    node-controller  Node ha-642542-m04 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  22s (x8 over 34s)      kubelet          Node ha-642542-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 34s)      kubelet          Node ha-642542-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x8 over 34s)      kubelet          Node ha-642542-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                     node-controller  Node ha-642542-m04 event: Registered Node ha-642542-m04 in Controller
	
	
	==> dmesg <==
	[  +0.001089] FS-Cache: O-key=[8] '99f0c90000000000'
	[  +0.000720] FS-Cache: N-cookie c=0000012b [p=00000122 fl=2 nc=0 na=1]
	[  +0.000962] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=000000007e54bbed
	[  +0.001047] FS-Cache: N-key=[8] '99f0c90000000000'
	[  +0.003607] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000125 [p=00000122 fl=226 nc=0 na=1]
	[  +0.001020] FS-Cache: O-cookie d=0000000075a7dcb2{9p.inode} n=00000000aba849d8
	[  +0.001065] FS-Cache: O-key=[8] '99f0c90000000000'
	[  +0.000717] FS-Cache: N-cookie c=0000012c [p=00000122 fl=2 nc=0 na=1]
	[  +0.000944] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=0000000055616a94
	[  +0.001052] FS-Cache: N-key=[8] '99f0c90000000000'
	[  +2.888650] FS-Cache: Duplicate cookie detected
	[  +0.000714] FS-Cache: O-cookie c=00000123 [p=00000122 fl=226 nc=0 na=1]
	[  +0.000965] FS-Cache: O-cookie d=0000000075a7dcb2{9p.inode} n=000000004d25a851
	[  +0.001205] FS-Cache: O-key=[8] '98f0c90000000000'
	[  +0.000730] FS-Cache: N-cookie c=0000012e [p=00000122 fl=2 nc=0 na=1]
	[  +0.000929] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=000000007e54bbed
	[  +0.001055] FS-Cache: N-key=[8] '98f0c90000000000'
	[  +0.356770] FS-Cache: Duplicate cookie detected
	[  +0.000713] FS-Cache: O-cookie c=00000128 [p=00000122 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=0000000075a7dcb2{9p.inode} n=000000003b71663d
	[  +0.001046] FS-Cache: O-key=[8] '9ef0c90000000000'
	[  +0.000697] FS-Cache: N-cookie c=0000012f [p=00000122 fl=2 nc=0 na=1]
	[  +0.000929] FS-Cache: N-cookie d=0000000075a7dcb2{9p.inode} n=000000008fd26398
	[  +0.001048] FS-Cache: N-key=[8] '9ef0c90000000000'
	
	
	==> etcd [3572d340b8ba0d72bee7c45ca5add8413c3ef3a717b5202fcc2dfd2dfbcaf2c8] <==
	{"level":"warn","ts":"2024-07-30T02:49:49.482114Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.554313Z","time spent":"6.927796841s","remote":"127.0.0.1:51588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.482126Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:41.981328Z","time spent":"7.500794713s","remote":"127.0.0.1:51696","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/leases/kube-system/apiserver-6g74rpm3aphfvpi3z4bhaeg3iu\" "}
	{"level":"warn","ts":"2024-07-30T02:49:49.482143Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.550113Z","time spent":"6.932021691s","remote":"127.0.0.1:51914","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.482168Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:41.80503Z","time spent":"7.677132433s","remote":"127.0.0.1:51782","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" "}
	{"level":"warn","ts":"2024-07-30T02:49:49.482182Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.788495Z","time spent":"6.693682223s","remote":"127.0.0.1:51838","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.482194Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:41.804848Z","time spent":"7.677342528s","remote":"127.0.0.1:51796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2024-07-30T02:49:49.482206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.782583Z","time spent":"6.699619645s","remote":"127.0.0.1:51888","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.482225Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.766765Z","time spent":"6.71545596s","remote":"127.0.0.1:51542","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.482435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.752845Z","time spent":"6.729580827s","remote":"127.0.0.1:51908","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.482457Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.723131Z","time spent":"6.75931854s","remote":"127.0.0.1:51546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/kube-system/\" range_end:\"/registry/configmaps/kube-system0\" limit:500 "}
	{"level":"info","ts":"2024-07-30T02:49:49.485453Z","caller":"traceutil/trace.go:171","msg":"trace[2030243986] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; }","duration":"6.714905017s","start":"2024-07-30T02:49:42.766834Z","end":"2024-07-30T02:49:49.481739Z","steps":["trace[2030243986] 'agreement among raft nodes before linearized reading'  (duration: 6.693192872s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-30T02:49:49.490156Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.766822Z","time spent":"6.723307692s","remote":"127.0.0.1:51558","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.490213Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.782744Z","time spent":"6.707453965s","remote":"127.0.0.1:51654","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.490252Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.769419Z","time spent":"6.720827859s","remote":"127.0.0.1:51662","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.49028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.777142Z","time spent":"6.713132694s","remote":"127.0.0.1:51804","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.490312Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.776117Z","time spent":"6.714177896s","remote":"127.0.0.1:51546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.490341Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.770808Z","time spent":"6.719527837s","remote":"127.0.0.1:51596","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.490372Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.770787Z","time spent":"6.719578922s","remote":"127.0.0.1:51606","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.490409Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.768237Z","time spent":"6.722166066s","remote":"127.0.0.1:51588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.490436Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.7694Z","time spent":"6.721023866s","remote":"127.0.0.1:51914","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.490471Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.769367Z","time spent":"6.721098941s","remote":"127.0.0.1:51660","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.490501Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.768072Z","time spent":"6.722423832s","remote":"127.0.0.1:51578","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.490552Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-30T02:49:42.787171Z","time spent":"6.703370963s","remote":"127.0.0.1:51822","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":0,"response size":0,"request content":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-30T02:49:49.6148Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7bf2723d4d3417e0","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-30T02:49:49.614926Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7bf2723d4d3417e0","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 02:51:15 up 1 day, 33 min,  0 users,  load average: 1.82, 2.41, 2.24
	Linux ha-642542 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [93c53cd34f2cc4d8d55c677fe4626684642df73573cb214177f9f79e00b720a2] <==
	I0730 02:50:43.481588       1 main.go:322] Node ha-642542-m02 has CIDR [10.244.1.0/24] 
	I0730 02:50:43.481751       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0730 02:50:43.481794       1 main.go:322] Node ha-642542-m04 has CIDR [10.244.3.0/24] 
	W0730 02:50:46.753039       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 02:50:46.753157       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0730 02:50:53.480648       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0730 02:50:53.480777       1 main.go:322] Node ha-642542-m04 has CIDR [10.244.3.0/24] 
	I0730 02:50:53.480942       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:50:53.480957       1 main.go:299] handling current node
	I0730 02:50:53.480970       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0730 02:50:53.480977       1 main.go:322] Node ha-642542-m02 has CIDR [10.244.1.0/24] 
	W0730 02:50:55.516175       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0730 02:50:55.516208       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0730 02:51:03.480963       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0730 02:51:03.481015       1 main.go:322] Node ha-642542-m04 has CIDR [10.244.3.0/24] 
	I0730 02:51:03.481118       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:51:03.481134       1 main.go:299] handling current node
	I0730 02:51:03.481147       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0730 02:51:03.481152       1 main.go:322] Node ha-642542-m02 has CIDR [10.244.1.0/24] 
	I0730 02:51:13.481316       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0730 02:51:13.481419       1 main.go:299] handling current node
	I0730 02:51:13.481459       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0730 02:51:13.481494       1 main.go:322] Node ha-642542-m02 has CIDR [10.244.1.0/24] 
	I0730 02:51:13.481653       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0730 02:51:13.481692       1 main.go:322] Node ha-642542-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c08ef6e89d3d904cb3a142e5f1351b59b531a5d4dcdf493b90929e1394a4f9c1] <==
	E0730 02:49:49.513952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: etcdserver: leader changed
	E0730 02:49:49.513850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: etcdserver: leader changed
	I0730 02:49:49.803916       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0730 02:49:51.297521       1 shared_informer.go:320] Caches are synced for configmaps
	I0730 02:49:51.297529       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0730 02:49:51.303681       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0730 02:49:51.404535       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0730 02:49:51.597633       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0730 02:49:52.003573       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0730 02:49:52.003608       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0730 02:49:52.362808       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 02:49:52.362931       1 policy_source.go:224] refreshing policies
	I0730 02:49:52.438227       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0730 02:49:52.438389       1 aggregator.go:165] initial CRD sync complete...
	I0730 02:49:52.438447       1 autoregister_controller.go:141] Starting autoregister controller
	I0730 02:49:52.438478       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0730 02:49:52.438514       1 cache.go:39] Caches are synced for autoregister controller
	I0730 02:49:52.542367       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0730 02:49:52.548260       1 trace.go:236] Trace[1694792483]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:23f980d0-eec3-4c1f-890c-2c21fb4677f1,client:::1,api-group:coordination.k8s.io,api-version:v1,name:apiserver-6g74rpm3aphfvpi3z4bhaeg3iu,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-6g74rpm3aphfvpi3z4bhaeg3iu,user-agent:kube-apiserver/v1.30.3 (linux/arm64) kubernetes/6fc0a69,verb:PUT (30-Jul-2024 02:49:49.916) (total time: 2632ms):
	Trace[1694792483]: ["GuaranteedUpdate etcd3" audit-id:23f980d0-eec3-4c1f-890c-2c21fb4677f1,key:/leases/kube-system/apiserver-6g74rpm3aphfvpi3z4bhaeg3iu,type:*coordination.Lease,resource:leases.coordination.k8s.io 2632ms (02:49:49.916)
	Trace[1694792483]:  ---"About to Encode" 2614ms (02:49:52.542)]
	Trace[1694792483]: [2.632121295s] [2.632121295s] END
	I0730 02:49:52.655017       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0730 02:49:52.696959       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	F0730 02:50:29.803871       1 hooks.go:203] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
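
	The F0730 line is fatal: PostStartHook "start-service-ip-repair-controllers" could not complete its initial IP/port allocation check (it reads the Service IP allocator state from etcd, which was still stalled above), so this apiserver container exited at 02:50:29; the second excerpt (e984781...) is its replacement coming up at 02:50:33. On a cri-o node the restart history can be confirmed with crictl; an illustrative check, not part of the harness:

	$ out/minikube-linux-arm64 -p ha-642542 ssh "sudo crictl ps -a --name kube-apiserver"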
	
	
	==> kube-apiserver [e984781037ce0a5c8e02cb17ba76f6d9b83d2cf95ce325c3225386d3eafbcfb9] <==
	I0730 02:50:33.382630       1 naming_controller.go:291] Starting NamingConditionController
	I0730 02:50:33.382671       1 establishing_controller.go:76] Starting EstablishingController
	I0730 02:50:33.382710       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0730 02:50:33.382751       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0730 02:50:33.382792       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0730 02:50:33.489290       1 shared_informer.go:320] Caches are synced for configmaps
	I0730 02:50:33.489825       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0730 02:50:33.491049       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0730 02:50:33.491144       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0730 02:50:33.500365       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0730 02:50:33.500415       1 policy_source.go:224] refreshing policies
	I0730 02:50:33.500506       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0730 02:50:33.511413       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0730 02:50:33.511451       1 aggregator.go:165] initial CRD sync complete...
	I0730 02:50:33.511458       1 autoregister_controller.go:141] Starting autoregister controller
	I0730 02:50:33.511464       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0730 02:50:33.511470       1 cache.go:39] Caches are synced for autoregister controller
	I0730 02:50:33.526990       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0730 02:50:33.582920       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0730 02:50:33.586027       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0730 02:50:33.590086       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0730 02:50:34.122268       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0730 02:50:34.521727       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0730 02:50:34.523164       1 controller.go:615] quota admission added evaluator for: endpoints
	I0730 02:50:34.531801       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bb11e54814707acc8ee3e43333af75c9546e45b1b9e1218946e40340bc5fb366] <==
	I0730 02:50:16.155069       1 serving.go:380] Generated self-signed cert in-memory
	I0730 02:50:16.935121       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0730 02:50:16.935149       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 02:50:16.936713       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0730 02:50:16.936809       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0730 02:50:16.937288       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0730 02:50:16.937335       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0730 02:50:26.958896       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
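
	The controller manager gave up after ~10s because the apiserver's /healthz kept returning the same failing hook seen above ([-]poststarthook/start-service-ip-repair-controllers); once the apiserver was replaced, the second controller-manager instance below synced normally. The verbose health endpoints can be queried directly; an illustrative check assuming context ha-642542:

	$ kubectl --context ha-642542 get --raw='/healthz?verbose'
	$ kubectl --context ha-642542 get --raw='/readyz?verbose'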
	
	
	==> kube-controller-manager [ed1991eedbf8795f074230fd86fdbe4ca438eee531eb87a4008bd1c995e389c5] <==
	I0730 02:51:12.511176       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0730 02:51:12.516572       1 shared_informer.go:320] Caches are synced for disruption
	I0730 02:51:12.519895       1 shared_informer.go:320] Caches are synced for GC
	I0730 02:51:12.524281       1 shared_informer.go:320] Caches are synced for persistent volume
	I0730 02:51:12.530454       1 shared_informer.go:320] Caches are synced for ephemeral
	I0730 02:51:12.530603       1 shared_informer.go:320] Caches are synced for cronjob
	I0730 02:51:12.533812       1 shared_informer.go:320] Caches are synced for attach detach
	I0730 02:51:12.534319       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0730 02:51:12.536024       1 shared_informer.go:320] Caches are synced for stateful set
	I0730 02:51:12.538563       1 shared_informer.go:320] Caches are synced for endpoint
	I0730 02:51:12.540694       1 shared_informer.go:320] Caches are synced for taint
	I0730 02:51:12.540897       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0730 02:51:12.542726       1 shared_informer.go:320] Caches are synced for deployment
	I0730 02:51:12.546308       1 shared_informer.go:320] Caches are synced for PVC protection
	I0730 02:51:12.548598       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0730 02:51:12.556623       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0730 02:51:12.645989       1 shared_informer.go:320] Caches are synced for resource quota
	I0730 02:51:12.651444       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-642542"
	I0730 02:51:12.657633       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-642542-m02"
	I0730 02:51:12.657840       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-642542-m04"
	I0730 02:51:12.657890       1 shared_informer.go:320] Caches are synced for resource quota
	I0730 02:51:12.660568       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0730 02:51:13.070287       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 02:51:13.070349       1 shared_informer.go:320] Caches are synced for garbage collector
	I0730 02:51:13.070362       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6fc500b1cc841608113ea05b25be69cfb8d860d1495cea44da42de5f3a427284] <==
	I0730 02:50:12.271005       1 server_linux.go:69] "Using iptables proxy"
	I0730 02:50:12.287435       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0730 02:50:12.309406       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0730 02:50:12.309462       1 server_linux.go:165] "Using iptables Proxier"
	I0730 02:50:12.311072       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0730 02:50:12.311147       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0730 02:50:12.311203       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0730 02:50:12.311469       1 server.go:872] "Version info" version="v1.30.3"
	I0730 02:50:12.311518       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0730 02:50:12.315830       1 config.go:192] "Starting service config controller"
	I0730 02:50:12.315854       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0730 02:50:12.315874       1 config.go:101] "Starting endpoint slice config controller"
	I0730 02:50:12.315878       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0730 02:50:12.316320       1 config.go:319] "Starting node config controller"
	I0730 02:50:12.316340       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0730 02:50:12.416025       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0730 02:50:12.416029       1 shared_informer.go:320] Caches are synced for service config
	I0730 02:50:12.416388       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c2192e4c0ee968fc208e3aa253642bebf0da951d0b9adc96efc8ca61c6980a7e] <==
	E0730 02:49:49.157613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0730 02:49:49.160367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0730 02:49:49.160405       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0730 02:49:49.637598       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0730 02:49:49.637650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0730 02:49:50.689918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0730 02:49:50.689965       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0730 02:49:50.886182       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0730 02:49:50.886226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0730 02:50:04.160404       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0730 02:50:33.392942       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:37304->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.396580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:37212->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.396741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:37222->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.396830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:37182->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.396931       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:37196->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.397052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:37216->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.398028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:37240->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.401348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:37180->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.401469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:37224->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.401551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:37282->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.401658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:37268->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.402529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:37218->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.402657       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:37256->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.402751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:37284->192.168.49.2:8443: read: connection reset by peer
	E0730 02:50:33.403205       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:37298->192.168.49.2:8443: read: connection reset by peer
	
	
	==> kubelet <==
	Jul 30 02:50:27 ha-642542 kubelet[759]: I0730 02:50:27.203794     759 scope.go:117] "RemoveContainer" containerID="e6ee2da4d0408cc62308f31259e3cdafda7526f547295d67c4c0dab953cc2652"
	Jul 30 02:50:27 ha-642542 kubelet[759]: I0730 02:50:27.204100     759 scope.go:117] "RemoveContainer" containerID="bb11e54814707acc8ee3e43333af75c9546e45b1b9e1218946e40340bc5fb366"
	Jul 30 02:50:27 ha-642542 kubelet[759]: E0730 02:50:27.204598     759 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-642542_kube-system(7e44fe40789adcf7cf616bb4fb3c6bfb)\"" pod="kube-system/kube-controller-manager-ha-642542" podUID="7e44fe40789adcf7cf616bb4fb3c6bfb"
	Jul 30 02:50:30 ha-642542 kubelet[759]: I0730 02:50:30.211824     759 scope.go:117] "RemoveContainer" containerID="c08ef6e89d3d904cb3a142e5f1351b59b531a5d4dcdf493b90929e1394a4f9c1"
	Jul 30 02:50:30 ha-642542 kubelet[759]: I0730 02:50:30.212700     759 status_manager.go:853] "Failed to get status for pod" podUID="2f3935141579348b92f362580fdbbdf4" pod="kube-system/kube-apiserver-ha-642542" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-642542\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Jul 30 02:50:30 ha-642542 kubelet[759]: E0730 02:50:30.215033     759 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-642542.17e6ddc926fdda76\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-642542.17e6ddc926fdda76  kube-system   2571 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-642542,UID:2f3935141579348b92f362580fdbbdf4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.30.3\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-642542,},FirstTimestamp:2024-07-30 02:49:23 +0000 UTC,LastTimestamp:2024-07-30 02:50:30.214311411 +0000 UTC m=+73.405923230,Count:2,Type:Normal,EventTime:0001-01-01 00:00
:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-642542,}"
	Jul 30 02:50:30 ha-642542 kubelet[759]: I0730 02:50:30.488594     759 scope.go:117] "RemoveContainer" containerID="bb11e54814707acc8ee3e43333af75c9546e45b1b9e1218946e40340bc5fb366"
	Jul 30 02:50:30 ha-642542 kubelet[759]: E0730 02:50:30.489164     759 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-642542_kube-system(7e44fe40789adcf7cf616bb4fb3c6bfb)\"" pod="kube-system/kube-controller-manager-ha-642542" podUID="7e44fe40789adcf7cf616bb4fb3c6bfb"
	Jul 30 02:50:33 ha-642542 kubelet[759]: E0730 02:50:33.308615     759 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:42904->192.168.49.254:8443: read: connection reset by peer
	Jul 30 02:50:33 ha-642542 kubelet[759]: E0730 02:50:33.311100     759 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:42882->192.168.49.254:8443: read: connection reset by peer
	Jul 30 02:50:33 ha-642542 kubelet[759]: E0730 02:50:33.316519     759 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:42910->192.168.49.254:8443: read: connection reset by peer
	Jul 30 02:50:33 ha-642542 kubelet[759]: E0730 02:50:33.316662     759 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:42930->192.168.49.254:8443: read: connection reset by peer
	Jul 30 02:50:33 ha-642542 kubelet[759]: I0730 02:50:33.515785     759 scope.go:117] "RemoveContainer" containerID="bb11e54814707acc8ee3e43333af75c9546e45b1b9e1218946e40340bc5fb366"
	Jul 30 02:50:33 ha-642542 kubelet[759]: E0730 02:50:33.516438     759 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-642542_kube-system(7e44fe40789adcf7cf616bb4fb3c6bfb)\"" pod="kube-system/kube-controller-manager-ha-642542" podUID="7e44fe40789adcf7cf616bb4fb3c6bfb"
	Jul 30 02:50:34 ha-642542 kubelet[759]: I0730 02:50:34.222516     759 scope.go:117] "RemoveContainer" containerID="14496513c39197eb5753d00aa7b93605809c730032e1693c5b1c3b2ff9c960a0"
	Jul 30 02:50:43 ha-642542 kubelet[759]: I0730 02:50:43.242900     759 scope.go:117] "RemoveContainer" containerID="e038a7d060d566aec4696d6f79eb5efa329a26b7822ecedf4d06ecd4075a946c"
	Jul 30 02:50:45 ha-642542 kubelet[759]: I0730 02:50:45.003727     759 scope.go:117] "RemoveContainer" containerID="bb11e54814707acc8ee3e43333af75c9546e45b1b9e1218946e40340bc5fb366"
	Jul 30 02:50:45 ha-642542 kubelet[759]: E0730 02:50:45.006674     759 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-642542_kube-system(7e44fe40789adcf7cf616bb4fb3c6bfb)\"" pod="kube-system/kube-controller-manager-ha-642542" podUID="7e44fe40789adcf7cf616bb4fb3c6bfb"
	Jul 30 02:50:45 ha-642542 kubelet[759]: E0730 02:50:45.533861     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-642542?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 30 02:50:45 ha-642542 kubelet[759]: E0730 02:50:45.555650     759 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-642542\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-642542?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 30 02:50:55 ha-642542 kubelet[759]: E0730 02:50:55.534293     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-642542?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 30 02:50:55 ha-642542 kubelet[759]: E0730 02:50:55.556750     759 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-642542\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-642542?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 30 02:51:00 ha-642542 kubelet[759]: I0730 02:51:00.013502     759 scope.go:117] "RemoveContainer" containerID="bb11e54814707acc8ee3e43333af75c9546e45b1b9e1218946e40340bc5fb366"
	Jul 30 02:51:05 ha-642542 kubelet[759]: E0730 02:51:05.535133     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-642542?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 30 02:51:05 ha-642542 kubelet[759]: E0730 02:51:05.557376     759 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-642542\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-642542?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	

                                                
                                                
-- /stdout --
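The scheduler and kubelet logs above tell a consistent story: every watch stream drops at 02:50:33 with "connection reset by peer" when the kube-apiserver restarts, and the kubelet is left backing off a crash-looping kube-controller-manager. A minimal manual triage sketch, assuming shell access to the ha-642542 node (the container and pod names are taken from the logs above; exact crictl flag behavior is an assumption):

	# List controller-manager containers, including exited (crashed) ones
	crictl ps -a --name kube-controller-manager
	# Fetch the logs of the previous, crashed controller-manager instance
	kubectl --context ha-642542 -n kube-system logs kube-controller-manager-ha-642542 --previous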
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-642542 -n ha-642542
helpers_test.go:261: (dbg) Run:  kubectl --context ha-642542 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (127.35s)
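To reproduce locally, the whole serial group has to be rerun, since RestartCluster depends on the cluster built by the earlier steps. A sketch of the usual invocation from a minikube source checkout (flag names follow minikube's integration-test harness and may vary between releases):

	go test -v -timeout 60m ./test/integration -run "TestMultiControlPlane" \
	  --minikube-start-args="--driver=docker --container-runtime=crio"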

                                                
                                    

Test pass (300/336)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.44
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 6.78
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.07
18 TestDownloadOnly/v1.30.3/DeleteAll 0.2
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 5.39
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.54
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 213.4
40 TestAddons/serial/GCPAuth/Namespaces 0.22
42 TestAddons/parallel/Registry 15.56
44 TestAddons/parallel/InspektorGadget 11.75
48 TestAddons/parallel/CSI 65.11
49 TestAddons/parallel/Headlamp 17.65
50 TestAddons/parallel/CloudSpanner 5.59
51 TestAddons/parallel/LocalPath 52.46
52 TestAddons/parallel/NvidiaDevicePlugin 6.53
53 TestAddons/parallel/Yakd 11.99
54 TestAddons/StoppedEnableDisable 12.13
55 TestCertOptions 40.58
56 TestCertExpiration 247.3
58 TestForceSystemdFlag 46.39
59 TestForceSystemdEnv 43.5
65 TestErrorSpam/setup 29.6
66 TestErrorSpam/start 0.72
67 TestErrorSpam/status 0.99
68 TestErrorSpam/pause 1.74
69 TestErrorSpam/unpause 1.74
70 TestErrorSpam/stop 1.43
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 61.88
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 27.91
77 TestFunctional/serial/KubeContext 0.09
78 TestFunctional/serial/KubectlGetPods 0.11
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.18
82 TestFunctional/serial/CacheCmd/cache/add_local 1
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
87 TestFunctional/serial/CacheCmd/cache/delete 0.11
88 TestFunctional/serial/MinikubeKubectlCmd 0.14
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
90 TestFunctional/serial/ExtraConfig 34.37
91 TestFunctional/serial/ComponentHealth 0.11
92 TestFunctional/serial/LogsCmd 1.61
93 TestFunctional/serial/LogsFileCmd 1.73
94 TestFunctional/serial/InvalidService 3.97
96 TestFunctional/parallel/ConfigCmd 0.45
97 TestFunctional/parallel/DashboardCmd 9.1
98 TestFunctional/parallel/DryRun 0.41
99 TestFunctional/parallel/InternationalLanguage 0.18
100 TestFunctional/parallel/StatusCmd 1.05
104 TestFunctional/parallel/ServiceCmdConnect 10.79
105 TestFunctional/parallel/AddonsCmd 0.22
106 TestFunctional/parallel/PersistentVolumeClaim 26.05
108 TestFunctional/parallel/SSHCmd 0.69
109 TestFunctional/parallel/CpCmd 2.36
111 TestFunctional/parallel/FileSync 0.28
112 TestFunctional/parallel/CertSync 2.03
116 TestFunctional/parallel/NodeLabels 0.15
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
120 TestFunctional/parallel/License 0.34
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
134 TestFunctional/parallel/ProfileCmd/profile_list 0.37
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
136 TestFunctional/parallel/MountCmd/any-port 8.67
137 TestFunctional/parallel/ServiceCmd/List 0.71
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
140 TestFunctional/parallel/ServiceCmd/Format 0.47
141 TestFunctional/parallel/ServiceCmd/URL 0.45
142 TestFunctional/parallel/MountCmd/specific-port 2.79
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.87
144 TestFunctional/parallel/Version/short 0.08
145 TestFunctional/parallel/Version/components 1.32
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
150 TestFunctional/parallel/ImageCommands/ImageBuild 2.65
151 TestFunctional/parallel/ImageCommands/Setup 0.79
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
155 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.57
156 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.04
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 186.26
169 TestMultiControlPlane/serial/DeployApp 6.99
170 TestMultiControlPlane/serial/PingHostFromPods 1.62
171 TestMultiControlPlane/serial/AddWorkerNode 35.57
172 TestMultiControlPlane/serial/NodeLabels 0.1
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
174 TestMultiControlPlane/serial/CopyFile 18.53
175 TestMultiControlPlane/serial/StopSecondaryNode 12.68
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.59
177 TestMultiControlPlane/serial/RestartSecondaryNode 23.21
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.24
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 191.37
180 TestMultiControlPlane/serial/DeleteSecondaryNode 12.19
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
182 TestMultiControlPlane/serial/StopCluster 35.66
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
185 TestMultiControlPlane/serial/AddSecondaryNode 74.04
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.7
190 TestJSONOutput/start/Command 59.13
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.69
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.65
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.83
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.22
215 TestKicCustomNetwork/create_custom_network 38.11
216 TestKicCustomNetwork/use_default_bridge_network 33.78
217 TestKicExistingNetwork 33.43
218 TestKicCustomSubnet 36.44
219 TestKicStaticIP 33.69
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 68.86
224 TestMountStart/serial/StartWithMountFirst 7.28
225 TestMountStart/serial/VerifyMountFirst 0.26
226 TestMountStart/serial/StartWithMountSecond 7.03
227 TestMountStart/serial/VerifyMountSecond 0.26
228 TestMountStart/serial/DeleteFirst 1.6
229 TestMountStart/serial/VerifyMountPostDelete 0.25
230 TestMountStart/serial/Stop 1.19
231 TestMountStart/serial/RestartStopped 7.83
232 TestMountStart/serial/VerifyMountPostStop 0.26
235 TestMultiNode/serial/FreshStart2Nodes 86.93
236 TestMultiNode/serial/DeployApp2Nodes 5.09
237 TestMultiNode/serial/PingHostFrom2Pods 1.16
238 TestMultiNode/serial/AddNode 31.4
239 TestMultiNode/serial/MultiNodeLabels 0.09
240 TestMultiNode/serial/ProfileList 0.33
241 TestMultiNode/serial/CopyFile 9.99
242 TestMultiNode/serial/StopNode 2.28
243 TestMultiNode/serial/StartAfterStop 10
244 TestMultiNode/serial/RestartKeepsNodes 86.59
245 TestMultiNode/serial/DeleteNode 5.33
246 TestMultiNode/serial/StopMultiNode 23.84
247 TestMultiNode/serial/RestartMultiNode 55.86
248 TestMultiNode/serial/ValidateNameConflict 32.43
253 TestPreload 124.5
255 TestScheduledStopUnix 109.34
258 TestInsufficientStorage 10.77
259 TestRunningBinaryUpgrade 80.71
261 TestKubernetesUpgrade 393.28
262 TestMissingContainerUpgrade 101.32
264 TestPause/serial/Start 67.72
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
267 TestNoKubernetes/serial/StartWithK8s 43
268 TestNoKubernetes/serial/StartWithStopK8s 17.5
269 TestNoKubernetes/serial/Start 6.5
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
271 TestNoKubernetes/serial/ProfileList 1.12
272 TestPause/serial/SecondStartNoReconfiguration 30.93
273 TestNoKubernetes/serial/Stop 1.26
274 TestNoKubernetes/serial/StartNoArgs 7.7
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
283 TestNetworkPlugins/group/false 3.57
287 TestPause/serial/Pause 1.37
288 TestPause/serial/VerifyStatus 0.41
289 TestPause/serial/Unpause 0.69
290 TestPause/serial/PauseAgain 0.85
291 TestPause/serial/DeletePaused 2.76
292 TestPause/serial/VerifyDeletedResources 0.42
293 TestStoppedBinaryUpgrade/Setup 0.72
294 TestStoppedBinaryUpgrade/Upgrade 95.12
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
303 TestNetworkPlugins/group/auto/Start 61.62
304 TestNetworkPlugins/group/auto/KubeletFlags 0.33
305 TestNetworkPlugins/group/auto/NetCatPod 10.42
306 TestNetworkPlugins/group/auto/DNS 0.22
307 TestNetworkPlugins/group/auto/Localhost 0.16
308 TestNetworkPlugins/group/auto/HairPin 0.2
309 TestNetworkPlugins/group/kindnet/Start 61.39
310 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
312 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
313 TestNetworkPlugins/group/kindnet/DNS 0.18
314 TestNetworkPlugins/group/kindnet/Localhost 0.15
315 TestNetworkPlugins/group/kindnet/HairPin 0.14
316 TestNetworkPlugins/group/calico/Start 70.53
317 TestNetworkPlugins/group/calico/ControllerPod 6.01
318 TestNetworkPlugins/group/calico/KubeletFlags 0.3
319 TestNetworkPlugins/group/calico/NetCatPod 11.3
320 TestNetworkPlugins/group/calico/DNS 0.18
321 TestNetworkPlugins/group/calico/Localhost 0.16
322 TestNetworkPlugins/group/calico/HairPin 0.16
323 TestNetworkPlugins/group/custom-flannel/Start 77.62
324 TestNetworkPlugins/group/enable-default-cni/Start 81.23
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
327 TestNetworkPlugins/group/custom-flannel/DNS 0.2
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
335 TestNetworkPlugins/group/flannel/Start 69.16
336 TestNetworkPlugins/group/bridge/Start 89.64
337 TestNetworkPlugins/group/flannel/ControllerPod 6.01
338 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
339 TestNetworkPlugins/group/flannel/NetCatPod 11.26
340 TestNetworkPlugins/group/flannel/DNS 0.19
341 TestNetworkPlugins/group/flannel/Localhost 0.15
342 TestNetworkPlugins/group/flannel/HairPin 0.16
343 TestNetworkPlugins/group/bridge/KubeletFlags 0.47
344 TestNetworkPlugins/group/bridge/NetCatPod 12.41
346 TestStartStop/group/old-k8s-version/serial/FirstStart 174.76
347 TestNetworkPlugins/group/bridge/DNS 0.23
348 TestNetworkPlugins/group/bridge/Localhost 0.3
349 TestNetworkPlugins/group/bridge/HairPin 0.4
351 TestStartStop/group/no-preload/serial/FirstStart 69.37
352 TestStartStop/group/no-preload/serial/DeployApp 8.33
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
354 TestStartStop/group/no-preload/serial/Stop 12.01
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
356 TestStartStop/group/no-preload/serial/SecondStart 266.89
357 TestStartStop/group/old-k8s-version/serial/DeployApp 9.54
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
359 TestStartStop/group/old-k8s-version/serial/Stop 12.08
360 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
361 TestStartStop/group/old-k8s-version/serial/SecondStart 130.73
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
364 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
365 TestStartStop/group/old-k8s-version/serial/Pause 3.12
367 TestStartStop/group/embed-certs/serial/FirstStart 61.63
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
371 TestStartStop/group/no-preload/serial/Pause 3.16
373 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.06
374 TestStartStop/group/embed-certs/serial/DeployApp 8.46
375 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.39
376 TestStartStop/group/embed-certs/serial/Stop 12.17
377 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
378 TestStartStop/group/embed-certs/serial/SecondStart 279.68
379 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.42
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
381 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
383 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 289.75
384 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.02
385 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
387 TestStartStop/group/embed-certs/serial/Pause 3.13
389 TestStartStop/group/newest-cni/serial/FirstStart 37.75
390 TestStartStop/group/newest-cni/serial/DeployApp 0
391 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
392 TestStartStop/group/newest-cni/serial/Stop 1.3
393 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
394 TestStartStop/group/newest-cni/serial/SecondStart 16.53
395 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
397 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
398 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
399 TestStartStop/group/newest-cni/serial/Pause 3.02
400 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
401 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
402 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.85
TestDownloadOnly/v1.20.0/json-events (7.44s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-113597 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-113597 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.440497543s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.44s)
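The json-events case exercises minikube's machine-readable output: with -o=json, each progress step is emitted as one CloudEvent per line on stdout. A sketch of pulling the step names out of the same command (jq assumed installed; the event type and data field names are those used by recent minikube releases):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-113597 \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep): \(.data.name)"'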

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-113597
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-113597: exit status 85 (73.115953ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-113597 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |          |
	|         | -p download-only-113597        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 02:25:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 02:25:10.901157 1597963 out.go:291] Setting OutFile to fd 1 ...
	I0730 02:25:10.901377 1597963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:25:10.901406 1597963 out.go:304] Setting ErrFile to fd 2...
	I0730 02:25:10.901428 1597963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:25:10.901701 1597963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	W0730 02:25:10.901879 1597963 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19348-1592571/.minikube/config/config.json: open /home/jenkins/minikube-integration/19348-1592571/.minikube/config/config.json: no such file or directory
	I0730 02:25:10.902311 1597963 out.go:298] Setting JSON to true
	I0730 02:25:10.903212 1597963 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":86857,"bootTime":1722219454,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0730 02:25:10.903314 1597963 start.go:139] virtualization:  
	I0730 02:25:10.906586 1597963 out.go:97] [download-only-113597] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0730 02:25:10.906766 1597963 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball: no such file or directory
	I0730 02:25:10.906880 1597963 notify.go:220] Checking for updates...
	I0730 02:25:10.909985 1597963 out.go:169] MINIKUBE_LOCATION=19348
	I0730 02:25:10.912168 1597963 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 02:25:10.914364 1597963 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:25:10.916682 1597963 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	I0730 02:25:10.918963 1597963 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0730 02:25:10.923759 1597963 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0730 02:25:10.924058 1597963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 02:25:10.949108 1597963 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0730 02:25:10.949245 1597963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:25:11.006720 1597963 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2024-07-30 02:25:10.994689388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:25:11.006847 1597963 docker.go:307] overlay module found
	I0730 02:25:11.009507 1597963 out.go:97] Using the docker driver based on user configuration
	I0730 02:25:11.009555 1597963 start.go:297] selected driver: docker
	I0730 02:25:11.009564 1597963 start.go:901] validating driver "docker" against <nil>
	I0730 02:25:11.009694 1597963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:25:11.073655 1597963 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2024-07-30 02:25:11.064038245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:25:11.073836 1597963 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 02:25:11.074140 1597963 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0730 02:25:11.074315 1597963 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0730 02:25:11.077160 1597963 out.go:169] Using Docker driver with root privileges
	I0730 02:25:11.079807 1597963 cni.go:84] Creating CNI manager for ""
	I0730 02:25:11.079831 1597963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0730 02:25:11.079843 1597963 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0730 02:25:11.079951 1597963 start.go:340] cluster config:
	{Name:download-only-113597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-113597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:25:11.082691 1597963 out.go:97] Starting "download-only-113597" primary control-plane node in "download-only-113597" cluster
	I0730 02:25:11.082715 1597963 cache.go:121] Beginning downloading kic base image for docker with crio
	I0730 02:25:11.085290 1597963 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0730 02:25:11.085322 1597963 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0730 02:25:11.085495 1597963 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0730 02:25:11.102085 1597963 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0730 02:25:11.102288 1597963 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0730 02:25:11.102401 1597963 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0730 02:25:11.141677 1597963 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0730 02:25:11.141707 1597963 cache.go:56] Caching tarball of preloaded images
	I0730 02:25:11.141869 1597963 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0730 02:25:11.144858 1597963 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0730 02:25:11.144931 1597963 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0730 02:25:11.231112 1597963 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0730 02:25:14.628635 1597963 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	
	
	* The control-plane node download-only-113597 host does not exist
	  To start a cluster, run: "minikube start -p download-only-113597"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
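The preload download above is checksum-pinned (?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 in the URL), so the cached tarball can be re-verified by hand against the same digest:

	md5sum /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	# expected: 59cd2ef07b53f039bfd1761b921f2a02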

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-113597
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (6.78s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-171878 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-171878 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.782739484s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (6.78s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-171878
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-171878: exit status 85 (70.540903ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-113597 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | -p download-only-113597        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| delete  | -p download-only-113597        | download-only-113597 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| start   | -o=json --download-only        | download-only-171878 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | -p download-only-171878        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 02:25:18
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 02:25:18.748124 1598172 out.go:291] Setting OutFile to fd 1 ...
	I0730 02:25:18.748772 1598172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:25:18.748824 1598172 out.go:304] Setting ErrFile to fd 2...
	I0730 02:25:18.748846 1598172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:25:18.749143 1598172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 02:25:18.749616 1598172 out.go:298] Setting JSON to true
	I0730 02:25:18.750557 1598172 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":86865,"bootTime":1722219454,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0730 02:25:18.750654 1598172 start.go:139] virtualization:  
	I0730 02:25:18.753099 1598172 out.go:97] [download-only-171878] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0730 02:25:18.753416 1598172 notify.go:220] Checking for updates...
	I0730 02:25:18.755637 1598172 out.go:169] MINIKUBE_LOCATION=19348
	I0730 02:25:18.757867 1598172 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 02:25:18.759611 1598172 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:25:18.761496 1598172 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	I0730 02:25:18.763376 1598172 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0730 02:25:18.766670 1598172 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0730 02:25:18.766989 1598172 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 02:25:18.793564 1598172 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0730 02:25:18.793665 1598172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:25:18.859180 1598172 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-30 02:25:18.849904596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:25:18.859300 1598172 docker.go:307] overlay module found
	I0730 02:25:18.861205 1598172 out.go:97] Using the docker driver based on user configuration
	I0730 02:25:18.861234 1598172 start.go:297] selected driver: docker
	I0730 02:25:18.861241 1598172 start.go:901] validating driver "docker" against <nil>
	I0730 02:25:18.861368 1598172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:25:18.917383 1598172 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-30 02:25:18.907050193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:25:18.917544 1598172 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 02:25:18.917835 1598172 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0730 02:25:18.917991 1598172 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0730 02:25:18.920053 1598172 out.go:169] Using Docker driver with root privileges
	I0730 02:25:18.921937 1598172 cni.go:84] Creating CNI manager for ""
	I0730 02:25:18.921957 1598172 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0730 02:25:18.921969 1598172 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0730 02:25:18.922055 1598172 start.go:340] cluster config:
	{Name:download-only-171878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-171878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:25:18.923713 1598172 out.go:97] Starting "download-only-171878" primary control-plane node in "download-only-171878" cluster
	I0730 02:25:18.923733 1598172 cache.go:121] Beginning downloading kic base image for docker with crio
	I0730 02:25:18.925461 1598172 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0730 02:25:18.925489 1598172 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:25:18.925653 1598172 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0730 02:25:18.942963 1598172 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0730 02:25:18.943165 1598172 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0730 02:25:18.943206 1598172 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0730 02:25:18.943218 1598172 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0730 02:25:18.943227 1598172 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0730 02:25:18.987497 1598172 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0730 02:25:18.987536 1598172 cache.go:56] Caching tarball of preloaded images
	I0730 02:25:18.987742 1598172 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:25:18.989901 1598172 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0730 02:25:18.989933 1598172 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 ...
	I0730 02:25:19.079707 1598172 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:bace9a3612be7d31e4d3c3d446951ced -> /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0730 02:25:22.478816 1598172 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 ...
	I0730 02:25:22.478926 1598172 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 ...
	I0730 02:25:23.361563 1598172 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0730 02:25:23.361919 1598172 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/download-only-171878/config.json ...
	I0730 02:25:23.361953 1598172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/download-only-171878/config.json: {Name:mk7ce5d8ce7aa981dd8c6bb669903a28f67766ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:25:23.362702 1598172 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0730 02:25:23.363439 1598172 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/linux/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-171878 host does not exist
	  To start a cluster, run: "minikube start -p download-only-171878"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.07s)
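
An aside on the preload step logged above: the download URL carries a ?checksum=md5:... query, and the tarball's digest is verified before the file is trusted (the "getting checksum" / "verifying checksum" lines). A minimal Go sketch of that verify-after-download pattern, assuming nothing about minikube's internals beyond what the log shows; the helper name and destination path are made up:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dest and rejects the file if its
// MD5 digest differs from wantHex. This mirrors the logged
// "?checksum=md5:..." step in spirit only, not minikube's downloader.
func downloadWithMD5(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Tee the response body into both the file and the hasher.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// URL and digest copied from the download.go:107 line above.
	fmt.Println(downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"bace9a3612be7d31e4d3c3d446951ced"))
}
```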

TestDownloadOnly/v1.30.3/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.20s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-171878
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0-beta.0/json-events (5.39s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-888216 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-888216 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.391217184s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (5.39s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-888216
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-888216: exit status 85 (71.125144ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-113597 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | -p download-only-113597             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| delete  | -p download-only-113597             | download-only-113597 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| start   | -o=json --download-only             | download-only-171878 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | -p download-only-171878             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| delete  | -p download-only-171878             | download-only-171878 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC | 30 Jul 24 02:25 UTC |
	| start   | -o=json --download-only             | download-only-888216 | jenkins | v1.33.1 | 30 Jul 24 02:25 UTC |                     |
	|         | -p download-only-888216             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/30 02:25:25
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0730 02:25:25.933585 1598369 out.go:291] Setting OutFile to fd 1 ...
	I0730 02:25:25.933812 1598369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:25:25.933845 1598369 out.go:304] Setting ErrFile to fd 2...
	I0730 02:25:25.933868 1598369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:25:25.934154 1598369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 02:25:25.934660 1598369 out.go:298] Setting JSON to true
	I0730 02:25:25.935616 1598369 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":86872,"bootTime":1722219454,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0730 02:25:25.935724 1598369 start.go:139] virtualization:  
	I0730 02:25:25.938001 1598369 out.go:97] [download-only-888216] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0730 02:25:25.938237 1598369 notify.go:220] Checking for updates...
	I0730 02:25:25.940077 1598369 out.go:169] MINIKUBE_LOCATION=19348
	I0730 02:25:25.942008 1598369 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 02:25:25.943988 1598369 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:25:25.945569 1598369 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	I0730 02:25:25.947224 1598369 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0730 02:25:25.950267 1598369 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0730 02:25:25.950529 1598369 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 02:25:25.980384 1598369 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0730 02:25:25.980495 1598369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:25:26.040642 1598369 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-30 02:25:26.030445759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:25:26.040768 1598369 docker.go:307] overlay module found
	I0730 02:25:26.042761 1598369 out.go:97] Using the docker driver based on user configuration
	I0730 02:25:26.042790 1598369 start.go:297] selected driver: docker
	I0730 02:25:26.042797 1598369 start.go:901] validating driver "docker" against <nil>
	I0730 02:25:26.042921 1598369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:25:26.096784 1598369 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-30 02:25:26.087546614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:25:26.096960 1598369 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0730 02:25:26.097298 1598369 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0730 02:25:26.097461 1598369 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0730 02:25:26.099651 1598369 out.go:169] Using Docker driver with root privileges
	I0730 02:25:26.101428 1598369 cni.go:84] Creating CNI manager for ""
	I0730 02:25:26.101453 1598369 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0730 02:25:26.101466 1598369 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0730 02:25:26.101557 1598369 start.go:340] cluster config:
	{Name:download-only-888216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-888216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:25:26.103353 1598369 out.go:97] Starting "download-only-888216" primary control-plane node in "download-only-888216" cluster
	I0730 02:25:26.103374 1598369 cache.go:121] Beginning downloading kic base image for docker with crio
	I0730 02:25:26.105167 1598369 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0730 02:25:26.105199 1598369 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0730 02:25:26.105250 1598369 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0730 02:25:26.121219 1598369 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0730 02:25:26.121358 1598369 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0730 02:25:26.121393 1598369 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0730 02:25:26.121404 1598369 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0730 02:25:26.121413 1598369 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0730 02:25:26.163107 1598369 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0730 02:25:26.163133 1598369 cache.go:56] Caching tarball of preloaded images
	I0730 02:25:26.163968 1598369 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0730 02:25:26.166006 1598369 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0730 02:25:26.166031 1598369 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0730 02:25:26.255417 1598369 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:70b5971c257ae4defe1f5d041a04e29c -> /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0730 02:25:29.438019 1598369 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0730 02:25:29.438195 1598369 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0730 02:25:30.285166 1598369 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0730 02:25:30.285570 1598369 profile.go:143] Saving config to /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/download-only-888216/config.json ...
	I0730 02:25:30.285606 1598369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/download-only-888216/config.json: {Name:mkdd0ba84aaa55e4f717d72525c28d0fe44b8da6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0730 02:25:30.286497 1598369 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0730 02:25:30.286695 1598369 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19348-1592571/.minikube/cache/linux/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-888216 host does not exist
	  To start a cluster, run: "minikube start -p download-only-888216"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)
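
An aside on the repeated `docker system info --format "{{json .}}"` calls in the trace above: the CLI prints one JSON object, from which the caller decodes only the fields it needs. A rough standalone sketch; the struct below covers a few fields visible in the log and is an assumption, not minikube's actual type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo captures just the fields this sketch cares about; the
// real payload (see the info.go:266 lines above) is far larger.
type dockerInfo struct {
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	Driver        string `json:"Driver"`
	ServerVersion string `json:"ServerVersion"`
}

func main() {
	// Same invocation as the logged cli_runner.go:164 command.
	out, err := exec.Command("docker", "system", "info",
		"--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker not available:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("%d CPUs, %d bytes RAM, driver %s, server %s\n",
		info.NCPU, info.MemTotal, info.Driver, info.ServerVersion)
}
```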

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-888216
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-628859 --alsologtostderr --binary-mirror http://127.0.0.1:42821 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-628859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-628859
--- PASS: TestBinaryMirror (0.54s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-261813
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-261813: exit status 85 (70.872785ms)

-- stdout --
	* Profile "addons-261813" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-261813"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-261813
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-261813: exit status 85 (65.449215ms)

-- stdout --
	* Profile "addons-261813" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-261813"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (213.4s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-261813 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-261813 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m33.39822304s)
--- PASS: TestAddons/Setup (213.40s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-261813 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-261813 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/parallel/Registry (15.56s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 8.863869ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-hmxpq" [831bfd95-6ae5-4eae-883c-71619d8c8922] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004897164s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-c2b4j" [8c7f77e7-adf9-4a3f-8a9a-0e7e917e1a2f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004807216s
addons_test.go:342: (dbg) Run:  kubectl --context addons-261813 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-261813 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-261813 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.647420151s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 ip
2024/07/30 02:29:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.56s)
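
An aside on the registry check above: it reduces to an HTTP GET against the registry endpoint (the wget --spider run inside the busybox pod, then the logged GET to 192.168.49.2:5000). A standalone sketch of the same reachability probe; the retry count and timeout are assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues a GET with a short timeout, retrying a few times the
// way a test helper might while a service comes up.
func probe(url string, attempts int) error {
	client := &http.Client{Timeout: 5 * time.Second}
	var err error
	for i := 0; i < attempts; i++ {
		var resp *http.Response
		resp, err = client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("status:", resp.Status)
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return err
}

func main() {
	// Address taken from the log above; only reachable from inside
	// the minikube network.
	fmt.Println(probe("http://192.168.49.2:5000", 3))
}
```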

TestAddons/parallel/InspektorGadget (11.75s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vq66g" [12364633-2bd5-41bf-a673-8ffc6fc19012] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003963986s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-261813
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-261813: (5.740044979s)
--- PASS: TestAddons/parallel/InspektorGadget (11.75s)

TestAddons/parallel/CSI (65.11s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 10.111676ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-261813 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-261813 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [95fc6ca8-14d5-4a31-bb20-3892f735760a] Pending
helpers_test.go:344: "task-pv-pod" [95fc6ca8-14d5-4a31-bb20-3892f735760a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [95fc6ca8-14d5-4a31-bb20-3892f735760a] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003210225s
addons_test.go:590: (dbg) Run:  kubectl --context addons-261813 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-261813 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-261813 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-261813 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-261813 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-261813 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-261813 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [db9d2756-d1cc-4450-87f5-2a44f0f80938] Pending
helpers_test.go:344: "task-pv-pod-restore" [db9d2756-d1cc-4450-87f5-2a44f0f80938] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [db9d2756-d1cc-4450-87f5-2a44f0f80938] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004141385s
addons_test.go:632: (dbg) Run:  kubectl --context addons-261813 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-261813 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-261813 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-261813 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.740935785s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.11s)
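
An aside on the long run of helpers_test.go:394 lines in this test: it is a poll loop that re-runs `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim reports Bound or a deadline expires. A standalone Go sketch of that loop, with the context, namespace, and claim name copied from the log; the 2s interval and the helper name are assumptions:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the claim's .status.phase the same way the
// logged helper does, giving up after timeout.
func waitForPVCBound(ctx, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	fmt.Println(waitForPVCBound("addons-261813", "default", "hpvc", 6*time.Minute))
}
```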

TestAddons/parallel/Headlamp (17.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-261813 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-94qh8" [a4cdb3f1-fb9a-458b-9fc2-5e79fd0b3558] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-94qh8" [a4cdb3f1-fb9a-458b-9fc2-5e79fd0b3558] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-94qh8" [a4cdb3f1-fb9a-458b-9fc2-5e79fd0b3558] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003942265s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-261813 addons disable headlamp --alsologtostderr -v=1: (5.704798654s)
--- PASS: TestAddons/parallel/Headlamp (17.65s)

TestAddons/parallel/CloudSpanner (5.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-9qnr4" [f4627c8e-75fa-4348-9687-1f26559d8bad] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006676544s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-261813
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

TestAddons/parallel/LocalPath (52.46s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-261813 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-261813 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-261813 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [eaab39d0-b31e-47d5-ae44-705c798c8d52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [eaab39d0-b31e-47d5-ae44-705c798c8d52] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [eaab39d0-b31e-47d5-ae44-705c798c8d52] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003723967s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-261813 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 ssh "cat /opt/local-path-provisioner/pvc-c44108cc-c5e9-43dd-8069-916608c7b030_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-261813 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-261813 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-261813 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.349267326s)
--- PASS: TestAddons/parallel/LocalPath (52.46s)

TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zrzpl" [73510050-2ea7-49cd-bf93-d1b56047d84f] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005076564s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-261813
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

TestAddons/parallel/Yakd (11.99s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-49s9f" [d8288162-7f0c-41d4-aedc-5795ac15204d] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005491198s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-261813 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-261813 addons disable yakd --alsologtostderr -v=1: (5.979390735s)
--- PASS: TestAddons/parallel/Yakd (11.99s)

TestAddons/StoppedEnableDisable (12.13s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-261813
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-261813: (11.849277596s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-261813
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-261813
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-261813
--- PASS: TestAddons/StoppedEnableDisable (12.13s)

TestCertOptions (40.58s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-339955 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-339955 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.913797472s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-339955 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-339955 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-339955 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-339955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-339955
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-339955: (1.982068619s)
--- PASS: TestCertOptions (40.58s)
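
A minimal sketch of what this test exercises, with a hypothetical profile name; the grep filter is an assumption about openssl's text output, which lists SANs under "X509v3 Subject Alternative Name":

	minikube start -p demo --apiserver-ips=192.168.15.15 \
	  --apiserver-names=www.google.com --apiserver-port=8555
	minikube ssh -p demo -- \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'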

TestCertExpiration (247.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-375229 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-375229 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.750951218s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-375229 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-375229 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (24.902067448s)
helpers_test.go:175: Cleaning up "cert-expiration-375229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-375229
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-375229: (2.643328637s)
--- PASS: TestCertExpiration (247.30s)
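
The flow above in sketch form (hypothetical profile; the second start is expected to regenerate the now-short-lived certs with the new lifetime):

	minikube start -p demo --memory=2048 --cert-expiration=3m
	# ...wait out the 3m window, then restart with a longer lifetime:
	minikube start -p demo --memory=2048 --cert-expiration=8760h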

TestForceSystemdFlag (46.39s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-010747 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-010747 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.48199424s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-010747 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-010747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-010747
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-010747: (2.486031047s)
--- PASS: TestForceSystemdFlag (46.39s)
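
A sketch of the check this test performs (hypothetical profile; the expected line is an assumption about how minikube renders CRI-O's cgroup setting in 02-crio.conf):

	minikube start -p demo --force-systemd --container-runtime=crio
	minikube ssh -p demo -- "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected (assumption): cgroup_manager = "systemd"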

TestForceSystemdEnv (43.5s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-834816 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-834816 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.886811568s)
helpers_test.go:175: Cleaning up "force-systemd-env-834816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-834816
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-834816: (2.608483886s)
--- PASS: TestForceSystemdEnv (43.50s)
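
The env-var variant of the same behavior, sketched (hypothetical profile; MINIKUBE_FORCE_SYSTEMD is the environment-variable counterpart of --force-systemd):

	MINIKUBE_FORCE_SYSTEMD=true minikube start -p demo --container-runtime=crio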

TestErrorSpam/setup (29.6s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-691091 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-691091 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-691091 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-691091 --driver=docker  --container-runtime=crio: (29.602484204s)
--- PASS: TestErrorSpam/setup (29.60s)

TestErrorSpam/start (0.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

TestErrorSpam/status (0.99s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 status
--- PASS: TestErrorSpam/status (0.99s)

TestErrorSpam/pause (1.74s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 pause
--- PASS: TestErrorSpam/pause (1.74s)

TestErrorSpam/unpause (1.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 stop: (1.237777699s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-691091 --log_dir /tmp/nospam-691091 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19348-1592571/.minikube/files/etc/test/nested/copy/1597958/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.88s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-359379 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-359379 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m1.868930976s)
--- PASS: TestFunctional/serial/StartWithProxy (61.88s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.91s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-359379 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-359379 --alsologtostderr -v=8: (27.905259371s)
functional_test.go:659: soft start took 27.908406254s for "functional-359379" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.91s)

TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-359379 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-359379 cache add registry.k8s.io/pause:3.1: (1.474517779s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-359379 cache add registry.k8s.io/pause:3.3: (1.458205119s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-359379 cache add registry.k8s.io/pause:latest: (1.244963451s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)
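
The cache subcommands used above, sketched end to end (hypothetical profile):

	minikube -p demo cache add registry.k8s.io/pause:3.1   # pull once, store in the local cache, load into the node
	minikube cache list                                    # the cache is tracked globally, not per profile
	minikube cache delete registry.k8s.io/pause:3.1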

TestFunctional/serial/CacheCmd/cache/add_local (1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-359379 /tmp/TestFunctionalserialCacheCmdcacheadd_local73518841/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 cache add minikube-local-cache-test:functional-359379
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 cache delete minikube-local-cache-test:functional-359379
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-359379
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.00s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-359379 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (315.091367ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-359379 cache reload: (1.168956748s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)
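
The reload round-trip above, as a sketch (hypothetical profile):

	minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest       # remove the image inside the node
	minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image is gone
	minikube -p demo cache reload                                           # re-load everything in the cache
	minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again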

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 kubectl -- --context functional-359379 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-359379 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (34.37s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-359379 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0730 02:39:07.151222 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:39:07.156996 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:39:07.167242 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:39:07.187500 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:39:07.227704 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:39:07.308004 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:39:07.468435 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:39:07.789014 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:39:08.429764 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:39:09.710021 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:39:12.270675 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-359379 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.374573196s)
functional_test.go:757: restart took 34.374674921s for "functional-359379" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.37s)
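
The restart above passes a component flag through to the running cluster; in sketch form (hypothetical profile):

	minikube start -p demo \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# --extra-config takes component.key=value pairs and re-applies them on a soft restart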

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-359379 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.61s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-359379 logs: (1.611684012s)
--- PASS: TestFunctional/serial/LogsCmd (1.61s)

TestFunctional/serial/LogsFileCmd (1.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 logs --file /tmp/TestFunctionalserialLogsFileCmd2742920656/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-359379 logs --file /tmp/TestFunctionalserialLogsFileCmd2742920656/001/logs.txt: (1.727556041s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

TestFunctional/serial/InvalidService (3.97s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-359379 apply -f testdata/invalidsvc.yaml
E0730 02:39:17.390850 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-359379
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-359379: exit status 115 (383.999731ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31467 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-359379 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.97s)
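
The failure mode above in sketch form (hypothetical profile): a Service whose selector matches no running pod makes "minikube service" exit with code 115 (SVC_UNREACHABLE) instead of printing a reachable URL:

	kubectl --context demo apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p demo; echo $?   # 115
	kubectl --context demo delete -f testdata/invalidsvc.yaml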

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-359379 config get cpus: exit status 14 (70.85193ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-359379 config get cpus: exit status 14 (66.622708ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
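
The set/get/unset cycle above, sketched (hypothetical profile); "config get" on an unset key exits 14:

	minikube -p demo config get cpus     # exit 14: key not in config
	minikube -p demo config set cpus 2
	minikube -p demo config get cpus     # prints 2
	minikube -p demo config unset cpus
	minikube -p demo config get cpus     # exit 14 again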

TestFunctional/parallel/DashboardCmd (9.1s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-359379 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-359379 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1625499: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.10s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-359379 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-359379 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (163.177518ms)

-- stdout --
	* [functional-359379] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0730 02:39:53.553058 1625208 out.go:291] Setting OutFile to fd 1 ...
	I0730 02:39:53.553292 1625208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:39:53.553323 1625208 out.go:304] Setting ErrFile to fd 2...
	I0730 02:39:53.553350 1625208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:39:53.553597 1625208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 02:39:53.553989 1625208 out.go:298] Setting JSON to false
	I0730 02:39:53.554935 1625208 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":87739,"bootTime":1722219454,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0730 02:39:53.555036 1625208 start.go:139] virtualization:  
	I0730 02:39:53.557522 1625208 out.go:177] * [functional-359379] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0730 02:39:53.559895 1625208 out.go:177]   - MINIKUBE_LOCATION=19348
	I0730 02:39:53.560052 1625208 notify.go:220] Checking for updates...
	I0730 02:39:53.564015 1625208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 02:39:53.566209 1625208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:39:53.568723 1625208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	I0730 02:39:53.570899 1625208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0730 02:39:53.573576 1625208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 02:39:53.575822 1625208 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:39:53.576472 1625208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 02:39:53.598662 1625208 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0730 02:39:53.598815 1625208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:39:53.650038 1625208 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-30 02:39:53.640105194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:39:53.650148 1625208 docker.go:307] overlay module found
	I0730 02:39:53.653462 1625208 out.go:177] * Using the docker driver based on existing profile
	I0730 02:39:53.655372 1625208 start.go:297] selected driver: docker
	I0730 02:39:53.655391 1625208 start.go:901] validating driver "docker" against &{Name:functional-359379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-359379 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:39:53.655494 1625208 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 02:39:53.658403 1625208 out.go:177] 
	W0730 02:39:53.660652 1625208 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0730 02:39:53.662560 1625208 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-359379 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
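
The rejected start above, sketched (hypothetical profile): --dry-run still runs the validators, so an undersized --memory fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY, below the 1800MB usable minimum) without touching the cluster:

	minikube start -p demo --dry-run --memory 250MB
	echo $?   # 23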

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-359379 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-359379 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (178.428021ms)

-- stdout --
	* [functional-359379] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0730 02:39:53.374444 1625161 out.go:291] Setting OutFile to fd 1 ...
	I0730 02:39:53.374634 1625161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:39:53.374644 1625161 out.go:304] Setting ErrFile to fd 2...
	I0730 02:39:53.374650 1625161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:39:53.375678 1625161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 02:39:53.376118 1625161 out.go:298] Setting JSON to false
	I0730 02:39:53.377067 1625161 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":87739,"bootTime":1722219454,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0730 02:39:53.377143 1625161 start.go:139] virtualization:  
	I0730 02:39:53.379672 1625161 out.go:177] * [functional-359379] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0730 02:39:53.381914 1625161 out.go:177]   - MINIKUBE_LOCATION=19348
	I0730 02:39:53.381951 1625161 notify.go:220] Checking for updates...
	I0730 02:39:53.386007 1625161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 02:39:53.388843 1625161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 02:39:53.392157 1625161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	I0730 02:39:53.394758 1625161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0730 02:39:53.398458 1625161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 02:39:53.401305 1625161 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:39:53.401948 1625161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 02:39:53.429876 1625161 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0730 02:39:53.429993 1625161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:39:53.485711 1625161 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-30 02:39:53.476111005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:39:53.485819 1625161 docker.go:307] overlay module found
	I0730 02:39:53.488118 1625161 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0730 02:39:53.489874 1625161 start.go:297] selected driver: docker
	I0730 02:39:53.489891 1625161 start.go:901] validating driver "docker" against &{Name:functional-359379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-359379 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0730 02:39:53.490016 1625161 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 02:39:53.492229 1625161 out.go:177] 
	W0730 02:39:53.494062 1625161 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0730 02:39:53.496062 1625161 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/ServiceCmdConnect (10.79s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-359379 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-359379 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-t4n7p" [b30f4c8c-28cc-4779-a463-290b4cd3ab01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-t4n7p" [b30f4c8c-28cc-4779-a463-290b4cd3ab01] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004619006s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31595
functional_test.go:1671: http://192.168.49.2:31595: success! body:

Hostname: hello-node-connect-6f49f58cd5-t4n7p

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31595
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.79s)
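
The connect flow above, sketched (hypothetical context/profile name "demo"; curl stands in for the HTTP probe the test performs):

	kubectl --context demo create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context demo expose deployment hello-node-connect --type=NodePort --port=8080
	curl "$(minikube -p demo service hello-node-connect --url)"   # echoserver reports the request back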

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (26.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1271f869-913c-457e-862a-8d9717e679f8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004108023s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-359379 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-359379 apply -f testdata/storage-provisioner/pvc.yaml
E0730 02:39:27.631441 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-359379 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-359379 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2285e9f3-a3e1-46ae-bcc8-ed6b21253055] Pending
helpers_test.go:344: "sp-pod" [2285e9f3-a3e1-46ae-bcc8-ed6b21253055] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2285e9f3-a3e1-46ae-bcc8-ed6b21253055] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003544898s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-359379 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-359379 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-359379 delete -f testdata/storage-provisioner/pod.yaml: (1.097180728s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-359379 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [37d5fe01-5182-4b44-b978-24ee4a1fe603] Pending
helpers_test.go:344: "sp-pod" [37d5fe01-5182-4b44-b978-24ee4a1fe603] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004188967s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-359379 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.05s)
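
The PVC round-trip above can be reproduced by hand against the same profile; a minimal sketch using only the kubectl commands the test itself invokes (claim and pod names come from the testdata manifests):

    # create the claim and a pod that mounts it at /tmp/mount
    kubectl --context functional-359379 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-359379 get pvc myclaim -o=json
    kubectl --context functional-359379 apply -f testdata/storage-provisioner/pod.yaml
    # write through the mount, recreate the pod, and confirm the file survived
    kubectl --context functional-359379 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-359379 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-359379 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-359379 exec sp-pod -- ls /tmp/mount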

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.36s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh -n functional-359379 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 cp functional-359379:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1953051186/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh -n functional-359379 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh -n functional-359379 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)
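
For reference, minikube cp copies in both directions: a host path to a node path, or profile:path back to the host. A sketch of the same round-trip run manually (the /tmp destination path is hypothetical):

    out/minikube-linux-arm64 -p functional-359379 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-359379 cp functional-359379:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
    out/minikube-linux-arm64 -p functional-359379 ssh -n functional-359379 "sudo cat /home/docker/cp-test.txt"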

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1597958/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo cat /etc/test/nested/copy/1597958/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (2.03s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1597958.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo cat /etc/ssl/certs/1597958.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1597958.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo cat /usr/share/ca-certificates/1597958.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15979582.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo cat /etc/ssl/certs/15979582.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15979582.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo cat /usr/share/ca-certificates/15979582.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.03s)
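
The .0 filenames checked above are OpenSSL subject-hash links: CA directories are indexed by the certificate's subject hash, so 51391683.0 should resolve to the same certificate as 1597958.pem. A sketch of verifying that correspondence by hand inside the VM, assuming openssl is available there:

    out/minikube-linux-arm64 -p functional-359379 ssh "openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/1597958.pem"
    # the printed hash should match the 51391683 name of the .0 file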

TestFunctional/parallel/NodeLabels (0.15s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-359379 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.15s)
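
The go-template above just dumps the label keys of the first node; a simpler equivalent when reading interactively:

    kubectl --context functional-359379 get nodes --show-labels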

TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-359379 ssh "sudo systemctl is-active docker": exit status 1 (401.440501ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-359379 ssh "sudo systemctl is-active containerd": exit status 1 (319.293753ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
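
systemctl is-active prints the unit state and exits 0 only for "active", so the exit status 3 seen above is the expected outcome: with crio as the cluster runtime, docker and containerd must both report inactive. The same probe can be run directly (the crio line is added here for contrast and is not part of the test):

    out/minikube-linux-arm64 -p functional-359379 ssh "sudo systemctl is-active crio"      # expect: active, exit 0
    out/minikube-linux-arm64 -p functional-359379 ssh "sudo systemctl is-active docker"    # expect: inactive, exit 3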

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-359379 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-359379 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-359379 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-359379 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1622912: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-359379 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-359379 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [425b8950-ced0-41f6-b162-11b9634c15e8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [425b8950-ced0-41f6-b162-11b9634c15e8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003794875s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-359379 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.8.224 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-359379 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
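
Taken together, the tunnel sub-tests cover the full lifecycle: start minikube tunnel, wait for the LoadBalancer service to be assigned an ingress IP, hit that IP directly, then tear the tunnel down. A manual sketch of the same flow (run the first command in a separate terminal, since it stays in the foreground until interrupted):

    out/minikube-linux-arm64 -p functional-359379 tunnel --alsologtostderr
    kubectl --context functional-359379 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.106.8.224/    # substitute the IP printed by the previous command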

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-359379 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-359379 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-bvhck" [e9eba75c-8231-4d3a-b920-f956588d4496] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-bvhck" [e9eba75c-8231-4d3a-b920-f956588d4496] Running
E0730 02:39:48.112323 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005440851s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "320.526809ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "52.163464ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "322.128086ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "54.542569ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/any-port (8.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdany-port2642448207/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722307189639553230" to /tmp/TestFunctionalparallelMountCmdany-port2642448207/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722307189639553230" to /tmp/TestFunctionalparallelMountCmdany-port2642448207/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722307189639553230" to /tmp/TestFunctionalparallelMountCmdany-port2642448207/001/test-1722307189639553230
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (452.316954ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 30 02:39 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 30 02:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 30 02:39 test-1722307189639553230
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh cat /mount-9p/test-1722307189639553230
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-359379 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e0ff9b86-8d6d-4b3e-9916-03c865d5de00] Pending
helpers_test.go:344: "busybox-mount" [e0ff9b86-8d6d-4b3e-9916-03c865d5de00] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e0ff9b86-8d6d-4b3e-9916-03c865d5de00] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e0ff9b86-8d6d-4b3e-9916-03c865d5de00] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005599381s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-359379 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdany-port2642448207/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.67s)
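
The initial non-zero findmnt exit above is just the 9p mount not having landed yet; the helper retries and the second probe succeeds. A manual sketch of the same check, assuming some host directory to export (the /tmp/demo path is hypothetical):

    out/minikube-linux-arm64 mount -p functional-359379 /tmp/demo:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-359379 ssh -- ls -la /mount-9p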

TestFunctional/parallel/ServiceCmd/List (0.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.71s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 service list -o json
functional_test.go:1490: Took "488.462406ms" to run "out/minikube-linux-arm64 -p functional-359379 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30171
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30171
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)
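
All four ServiceCmd probes resolve the same NodePort, just through different output modes (--https, --format, plain --url); the endpoint can be exercised end to end with:

    out/minikube-linux-arm64 -p functional-359379 service hello-node --url
    curl http://192.168.49.2:30171/    # expect the echoserver response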

TestFunctional/parallel/MountCmd/specific-port (2.79s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdspecific-port1230810444/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (530.330978ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdspecific-port1230810444/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-359379 ssh "sudo umount -f /mount-9p": exit status 1 (470.05943ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-359379 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdspecific-port1230810444/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.79s)
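
Note the final umount failure is benign: exit status 32 is umount's mount-failure code, and the "not mounted" message shows the stop handler had already torn down /mount-9p. The only difference from the any-port variant is pinning the 9p server port, e.g. (host path hypothetical):

    out/minikube-linux-arm64 mount -p functional-359379 /tmp/demo:/mount-9p --port 46464 &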

TestFunctional/parallel/MountCmd/VerifyCleanup (2.87s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848417828/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848417828/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848417828/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T" /mount1: exit status 1 (1.039252352s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T" /mount1
2024/07/30 02:40:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-359379 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848417828/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848417828/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-359379 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848417828/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.87s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-359379 version -o=json --components: (1.320728165s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-359379 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240719-e7903573
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-359379
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-359379 image ls --format short --alsologtostderr:
I0730 02:40:11.199191 1628008 out.go:291] Setting OutFile to fd 1 ...
I0730 02:40:11.199441 1628008 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 02:40:11.199457 1628008 out.go:304] Setting ErrFile to fd 2...
I0730 02:40:11.199464 1628008 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 02:40:11.199785 1628008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
I0730 02:40:11.200709 1628008 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 02:40:11.200951 1628008 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 02:40:11.201577 1628008 cli_runner.go:164] Run: docker container inspect functional-359379 --format={{.State.Status}}
I0730 02:40:11.225249 1628008 ssh_runner.go:195] Run: systemctl --version
I0730 02:40:11.225301 1628008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-359379
I0730 02:40:11.249657 1628008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38893 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/functional-359379/id_rsa Username:docker}
I0730 02:40:11.349556 1628008 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-359379 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kicbase/echo-server           | functional-359379  | ce2d2cda2d858 | 4.79MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5e32961ddcea3 | 90.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| docker.io/library/nginx                 | latest             | 43b17fe33c4b4 | 197MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 2351f570ed0ea | 89.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 61773190d42ff | 114MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | d48f992a22722 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20240719-e7903573 | f42786f8afd22 | 90.3MB |
| docker.io/library/nginx                 | alpine             | d7cd33d7d4ed1 | 46.7MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 8e97cdb19e7cc | 108MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-359379 image ls --format table --alsologtostderr:
I0730 02:40:11.488077 1628072 out.go:291] Setting OutFile to fd 1 ...
I0730 02:40:11.488270 1628072 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 02:40:11.488303 1628072 out.go:304] Setting ErrFile to fd 2...
I0730 02:40:11.488334 1628072 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 02:40:11.488716 1628072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
I0730 02:40:11.489975 1628072 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 02:40:11.490111 1628072 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 02:40:11.490648 1628072 cli_runner.go:164] Run: docker container inspect functional-359379 --format={{.State.Status}}
I0730 02:40:11.521727 1628072 ssh_runner.go:195] Run: systemctl --version
I0730 02:40:11.521784 1628072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-359379
I0730 02:40:11.540555 1628072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38893 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/functional-359379/id_rsa Username:docker}
I0730 02:40:11.634052 1628072 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-359379 image ls --format json --alsologtostderr:
[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad234
10e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800","repoDigests":["docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a","docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"],"repoTags":["docker.io/kindest/kindnetd:v20240719-e7903573"],"size":"90281007"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9","docker.io/library/nginx@sha256:37d07a7f2aef3a0cc9ca4aafd9331c0796e47536c06a1f7304f98d69816baed7"],"repoTags":["docker.io/library/nginx:alpine"],"size":"46671358"},{"id":"1611cd07b
61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":["registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"89199511"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4","registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95"],"repoTags":["r
egistry.k8s.io/kube-scheduler:v1.30.3"],"size":"61568326"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:functional-359379"],"size":"4788229"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":["docker.io/library/nginx@sha256:2732a234518030d4fd7a4562515a42d05d93a99faba1c2b07c68e0eeaa9ee65c","docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c"],"repoTags":["docker.io/library/nginx:latest"],"size":"197104786"},{
"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":["registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"113538528"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499","registry.k8s.io/kube-controller-manager@sha256:eff43da5
5a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"108229958"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493","docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"90278450"},{"id":"a422e0e982356f6c1c
f0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-359379 image ls --format json --alsologtostderr:
I0730 02:40:11.494432 1628071 out.go:291] Setting OutFile to fd 1 ...
I0730 02:40:11.494609 1628071 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 02:40:11.494622 1628071 out.go:304] Setting ErrFile to fd 2...
I0730 02:40:11.494628 1628071 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 02:40:11.494889 1628071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
I0730 02:40:11.495540 1628071 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 02:40:11.495719 1628071 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 02:40:11.497426 1628071 cli_runner.go:164] Run: docker container inspect functional-359379 --format={{.State.Status}}
I0730 02:40:11.521731 1628071 ssh_runner.go:195] Run: systemctl --version
I0730 02:40:11.521786 1628071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-359379
I0730 02:40:11.543740 1628071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38893 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/functional-359379/id_rsa Username:docker}
I0730 02:40:11.638226 1628071 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
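
The json format is the machine-readable one; a sketch of pulling just the tag list (falling back to the image ID for untagged entries) out of that output, assuming jq is installed on the host:

    out/minikube-linux-arm64 -p functional-359379 image ls --format json | jq -r '.[] | .repoTags[]? // .id'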

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-359379 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800
repoDigests:
- docker.io/kindest/kindnetd@sha256:14100a3a7aca6cad3de3f26ee342ad937ca7d2844b1983d3baa7bf5f491baa7a
- docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a
repoTags:
- docker.io/kindest/kindnetd:v20240719-e7903573
size: "90281007"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests:
- docker.io/library/nginx@sha256:2732a234518030d4fd7a4562515a42d05d93a99faba1c2b07c68e0eeaa9ee65c
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
repoTags:
- docker.io/library/nginx:latest
size: "197104786"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "108229958"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "89199511"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:functional-359379
size: "4788229"
- id: 5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
- docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "90278450"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
- docker.io/library/nginx@sha256:37d07a7f2aef3a0cc9ca4aafd9331c0796e47536c06a1f7304f98d69816baed7
repoTags:
- docker.io/library/nginx:alpine
size: "46671358"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "113538528"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
- registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "61568326"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-359379 image ls --format yaml --alsologtostderr:
I0730 02:40:11.194022 1628009 out.go:291] Setting OutFile to fd 1 ...
I0730 02:40:11.194232 1628009 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 02:40:11.194262 1628009 out.go:304] Setting ErrFile to fd 2...
I0730 02:40:11.194282 1628009 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 02:40:11.194561 1628009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
I0730 02:40:11.195376 1628009 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 02:40:11.195556 1628009 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 02:40:11.196091 1628009 cli_runner.go:164] Run: docker container inspect functional-359379 --format={{.State.Status}}
I0730 02:40:11.216462 1628009 ssh_runner.go:195] Run: systemctl --version
I0730 02:40:11.216516 1628009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-359379
I0730 02:40:11.239697 1628009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38893 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/functional-359379/id_rsa Username:docker}
I0730 02:40:11.333264 1628009 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-359379 ssh pgrep buildkitd: exit status 1 (286.354448ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image build -t localhost/my-image:functional-359379 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-359379 image build -t localhost/my-image:functional-359379 testdata/build --alsologtostderr: (2.136477083s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-359379 image build -t localhost/my-image:functional-359379 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e1035cc8ff9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-359379
--> c8940f13d54
Successfully tagged localhost/my-image:functional-359379
c8940f13d54ed5434d0c408a813f5f04b6b874a98383b39c52a42896bd0bbc7e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-359379 image build -t localhost/my-image:functional-359379 testdata/build --alsologtostderr:
I0730 02:40:12.038517 1628197 out.go:291] Setting OutFile to fd 1 ...
I0730 02:40:12.040325 1628197 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 02:40:12.040347 1628197 out.go:304] Setting ErrFile to fd 2...
I0730 02:40:12.040354 1628197 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 02:40:12.040690 1628197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
I0730 02:40:12.041609 1628197 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 02:40:12.044048 1628197 config.go:182] Loaded profile config "functional-359379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0730 02:40:12.044834 1628197 cli_runner.go:164] Run: docker container inspect functional-359379 --format={{.State.Status}}
I0730 02:40:12.064288 1628197 ssh_runner.go:195] Run: systemctl --version
I0730 02:40:12.064360 1628197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-359379
I0730 02:40:12.082870 1628197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38893 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/functional-359379/id_rsa Username:docker}
I0730 02:40:12.176977 1628197 build_images.go:161] Building image from path: /tmp/build.2800495774.tar
I0730 02:40:12.177054 1628197 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0730 02:40:12.186821 1628197 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2800495774.tar
I0730 02:40:12.191125 1628197 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2800495774.tar: stat -c "%s %y" /var/lib/minikube/build/build.2800495774.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2800495774.tar': No such file or directory
I0730 02:40:12.191154 1628197 ssh_runner.go:362] scp /tmp/build.2800495774.tar --> /var/lib/minikube/build/build.2800495774.tar (3072 bytes)
I0730 02:40:12.218188 1628197 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2800495774
I0730 02:40:12.227438 1628197 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2800495774 -xf /var/lib/minikube/build/build.2800495774.tar
I0730 02:40:12.237140 1628197 crio.go:315] Building image: /var/lib/minikube/build/build.2800495774
I0730 02:40:12.237260 1628197 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-359379 /var/lib/minikube/build/build.2800495774 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0730 02:40:14.089233 1628197 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-359379 /var/lib/minikube/build/build.2800495774 --cgroup-manager=cgroupfs: (1.851943378s)
I0730 02:40:14.089337 1628197 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2800495774
I0730 02:40:14.098405 1628197 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2800495774.tar
I0730 02:40:14.107045 1628197 build_images.go:217] Built localhost/my-image:functional-359379 from /tmp/build.2800495774.tar
I0730 02:40:14.107075 1628197 build_images.go:133] succeeded building to: functional-359379
I0730 02:40:14.107080 1628197 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.65s)
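
The trace spells out minikube's CRI-O build path: pack the context into a tar, copy it to /var/lib/minikube/build inside the node, unpack it, and drive the build with podman under the cgroupfs manager. A rough hand-run equivalent, assuming the context lives in testdata/build (staging paths hypothetical):

    tar -C testdata/build -cf /tmp/ctx.tar .                       # pack the build context
    minikube -p functional-359379 cp /tmp/ctx.tar /home/docker/ctx.tar
    minikube -p functional-359379 ssh "sudo mkdir -p /tmp/ctx && sudo tar -C /tmp/ctx -xf /home/docker/ctx.tar"
    minikube -p functional-359379 ssh "sudo podman build -t localhost/my-image:functional-359379 /tmp/ctx --cgroup-manager=cgroupfs"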

TestFunctional/parallel/ImageCommands/Setup (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-359379
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
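
All three subtests drive `minikube update-context`, which rewrites the profile's kubeconfig entry so kubectl targets the cluster's current endpoint. A quick way to verify the result after running it, as a sketch (the kubeconfig cluster entry is named after the profile):

    minikube -p functional-359379 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-359379")].cluster.server}'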

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image load --daemon docker.io/kicbase/echo-server:functional-359379 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-359379 image load --daemon docker.io/kicbase/echo-server:functional-359379 --alsologtostderr: (1.292053643s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)
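
`image load --daemon` transfers an image out of the host's Docker daemon into the cluster's CRI-O storage, which is why the Setup step above first pulled and re-tagged echo-server. The round trip, condensed from this run's commands:

    docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-359379
    minikube -p functional-359379 image load --daemon docker.io/kicbase/echo-server:functional-359379
    minikube -p functional-359379 image ls | grep echo-server   # confirm it landed in CRI-O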

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image load --daemon docker.io/kicbase/echo-server:functional-359379 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-359379
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image load --daemon docker.io/kicbase/echo-server:functional-359379 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image save docker.io/kicbase/echo-server:functional-359379 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image rm docker.io/kicbase/echo-server:functional-359379 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)
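
ImageSaveToFile and ImageLoadFromFile are two halves of a tarball round trip: export from the cluster runtime to a host path, then import the same archive back. A sketch (archive path hypothetical):

    minikube -p functional-359379 image save docker.io/kicbase/echo-server:functional-359379 /tmp/echo-server.tar
    minikube -p functional-359379 image rm docker.io/kicbase/echo-server:functional-359379
    minikube -p functional-359379 image load /tmp/echo-server.tar
    minikube -p functional-359379 image ls | grep echo-server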

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-359379
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-359379 image save --daemon docker.io/kicbase/echo-server:functional-359379 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-359379
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-359379
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-359379
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-359379
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (186.26s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-642542 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0730 02:40:29.072609 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:41:50.993434 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-642542 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m5.450662138s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (186.26s)

TestMultiControlPlane/serial/DeployApp (6.99s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-642542 -- rollout status deployment/busybox: (3.943182122s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-2bv2v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-csrtf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-hjvpk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-2bv2v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-csrtf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-hjvpk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-2bv2v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-csrtf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-hjvpk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.99s)
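
The three nslookup rounds check that every busybox replica can resolve an external name, the in-cluster service name, and its fully qualified form. The same probe for an arbitrary replica, as a sketch (the label selector is an assumption about ha-pod-dns-test.yaml):

    POD=$(kubectl --context ha-642542 get pods -l app=busybox -o jsonpath='{.items[0].metadata.name}')
    kubectl --context ha-642542 exec "$POD" -- nslookup kubernetes.io
    kubectl --context ha-642542 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local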

TestMultiControlPlane/serial/PingHostFromPods (1.62s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-2bv2v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-2bv2v -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-csrtf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-csrtf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-hjvpk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-642542 -- exec busybox-fc5497c4f-hjvpk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)
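
The pipeline here leans on busybox's nslookup output layout: the resolved address for host.minikube.internal sits on line 5, `awk 'NR==5'` grabs that line, and `cut -d' ' -f3` strips it down to the host gateway IP (192.168.49.1 in this run), which the pod then pings. Parameterized, with the pod name as a placeholder from this run:

    POD=busybox-fc5497c4f-2bv2v
    HOST_IP=$(kubectl --context ha-642542 exec "$POD" -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-642542 exec "$POD" -- ping -c 1 "$HOST_IP"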

TestMultiControlPlane/serial/AddWorkerNode (35.57s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-642542 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-642542 -v=7 --alsologtostderr: (34.607766597s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr
E0730 02:44:07.148535 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.57s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-642542 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

TestMultiControlPlane/serial/CopyFile (18.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp testdata/cp-test.txt ha-642542:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile801376715/001/cp-test_ha-642542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542:/home/docker/cp-test.txt ha-642542-m02:/home/docker/cp-test_ha-642542_ha-642542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m02 "sudo cat /home/docker/cp-test_ha-642542_ha-642542-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542:/home/docker/cp-test.txt ha-642542-m03:/home/docker/cp-test_ha-642542_ha-642542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m03 "sudo cat /home/docker/cp-test_ha-642542_ha-642542-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542:/home/docker/cp-test.txt ha-642542-m04:/home/docker/cp-test_ha-642542_ha-642542-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m04 "sudo cat /home/docker/cp-test_ha-642542_ha-642542-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp testdata/cp-test.txt ha-642542-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile801376715/001/cp-test_ha-642542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m02:/home/docker/cp-test.txt ha-642542:/home/docker/cp-test_ha-642542-m02_ha-642542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542 "sudo cat /home/docker/cp-test_ha-642542-m02_ha-642542.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m02:/home/docker/cp-test.txt ha-642542-m03:/home/docker/cp-test_ha-642542-m02_ha-642542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m03 "sudo cat /home/docker/cp-test_ha-642542-m02_ha-642542-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m02:/home/docker/cp-test.txt ha-642542-m04:/home/docker/cp-test_ha-642542-m02_ha-642542-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m04 "sudo cat /home/docker/cp-test_ha-642542-m02_ha-642542-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp testdata/cp-test.txt ha-642542-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile801376715/001/cp-test_ha-642542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m03:/home/docker/cp-test.txt ha-642542:/home/docker/cp-test_ha-642542-m03_ha-642542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542 "sudo cat /home/docker/cp-test_ha-642542-m03_ha-642542.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m03:/home/docker/cp-test.txt ha-642542-m02:/home/docker/cp-test_ha-642542-m03_ha-642542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m02 "sudo cat /home/docker/cp-test_ha-642542-m03_ha-642542-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m03:/home/docker/cp-test.txt ha-642542-m04:/home/docker/cp-test_ha-642542-m03_ha-642542-m04.txt
E0730 02:44:21.808733 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
E0730 02:44:21.814380 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
E0730 02:44:21.824597 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
E0730 02:44:21.844811 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
E0730 02:44:21.885085 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m03 "sudo cat /home/docker/cp-test.txt"
E0730 02:44:21.966373 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
E0730 02:44:22.126703 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m04 "sudo cat /home/docker/cp-test_ha-642542-m03_ha-642542-m04.txt"
E0730 02:44:22.447518 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp testdata/cp-test.txt ha-642542-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m04 "sudo cat /home/docker/cp-test.txt"
E0730 02:44:23.088518 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile801376715/001/cp-test_ha-642542-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m04:/home/docker/cp-test.txt ha-642542:/home/docker/cp-test_ha-642542-m04_ha-642542.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m04 "sudo cat /home/docker/cp-test.txt"
E0730 02:44:24.369107 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542 "sudo cat /home/docker/cp-test_ha-642542-m04_ha-642542.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m04:/home/docker/cp-test.txt ha-642542-m02:/home/docker/cp-test_ha-642542-m04_ha-642542-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m02 "sudo cat /home/docker/cp-test_ha-642542-m04_ha-642542-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 cp ha-642542-m04:/home/docker/cp-test.txt ha-642542-m03:/home/docker/cp-test_ha-642542-m04_ha-642542-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 ssh -n ha-642542-m03 "sudo cat /home/docker/cp-test_ha-642542-m04_ha-642542-m03.txt"
E0730 02:44:26.929902 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/CopyFile (18.53s)
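
The matrix above exercises every direction `minikube cp` supports: host path to <node>:<path>, node back to host, and node to node, with each copy verified through `ssh -n <node> "sudo cat ..."`. Condensed to one example per direction, drawn from the commands in this run:

    minikube -p ha-642542 cp testdata/cp-test.txt ha-642542:/home/docker/cp-test.txt        # host -> node
    minikube -p ha-642542 cp ha-642542:/home/docker/cp-test.txt /tmp/cp-test.txt            # node -> host
    minikube -p ha-642542 cp ha-642542:/home/docker/cp-test.txt ha-642542-m02:/home/docker/cp-test.txt  # node -> node
    minikube -p ha-642542 ssh -n ha-642542-m02 "sudo cat /home/docker/cp-test.txt"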

TestMultiControlPlane/serial/StopSecondaryNode (12.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 node stop m02 -v=7 --alsologtostderr
E0730 02:44:32.050441 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
E0730 02:44:34.834002 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-642542 node stop m02 -v=7 --alsologtostderr: (11.953084091s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr: exit status 7 (724.156063ms)

-- stdout --
	ha-642542
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-642542-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-642542-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-642542-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0730 02:44:39.004697 1644269 out.go:291] Setting OutFile to fd 1 ...
	I0730 02:44:39.005103 1644269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:44:39.005421 1644269 out.go:304] Setting ErrFile to fd 2...
	I0730 02:44:39.005449 1644269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:44:39.005777 1644269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 02:44:39.006098 1644269 out.go:298] Setting JSON to false
	I0730 02:44:39.006183 1644269 mustload.go:65] Loading cluster: ha-642542
	I0730 02:44:39.006241 1644269 notify.go:220] Checking for updates...
	I0730 02:44:39.007646 1644269 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:44:39.007701 1644269 status.go:255] checking status of ha-642542 ...
	I0730 02:44:39.008479 1644269 cli_runner.go:164] Run: docker container inspect ha-642542 --format={{.State.Status}}
	I0730 02:44:39.026066 1644269 status.go:330] ha-642542 host status = "Running" (err=<nil>)
	I0730 02:44:39.026089 1644269 host.go:66] Checking if "ha-642542" exists ...
	I0730 02:44:39.026515 1644269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542
	I0730 02:44:39.045761 1644269 host.go:66] Checking if "ha-642542" exists ...
	I0730 02:44:39.046218 1644269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 02:44:39.046280 1644269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542
	I0730 02:44:39.072760 1644269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38898 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542/id_rsa Username:docker}
	I0730 02:44:39.169241 1644269 ssh_runner.go:195] Run: systemctl --version
	I0730 02:44:39.174916 1644269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 02:44:39.187616 1644269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 02:44:39.246617 1644269 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-07-30 02:44:39.234368362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 02:44:39.247368 1644269 kubeconfig.go:125] found "ha-642542" server: "https://192.168.49.254:8443"
	I0730 02:44:39.247403 1644269 api_server.go:166] Checking apiserver status ...
	I0730 02:44:39.247448 1644269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 02:44:39.258627 1644269 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup
	I0730 02:44:39.268311 1644269 api_server.go:182] apiserver freezer: "2:freezer:/docker/89e6f2fdaeb97e179b135e3b11063558312a7eeb528ccee51d2941fa5ee86cd3/crio/crio-d54de591fc6f65799b1914446c3cdab04f609d91031f06e0f6fd7c747e3eb0d9"
	I0730 02:44:39.268450 1644269 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/89e6f2fdaeb97e179b135e3b11063558312a7eeb528ccee51d2941fa5ee86cd3/crio/crio-d54de591fc6f65799b1914446c3cdab04f609d91031f06e0f6fd7c747e3eb0d9/freezer.state
	I0730 02:44:39.282091 1644269 api_server.go:204] freezer state: "THAWED"
	I0730 02:44:39.282131 1644269 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0730 02:44:39.290152 1644269 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0730 02:44:39.290196 1644269 status.go:422] ha-642542 apiserver status = Running (err=<nil>)
	I0730 02:44:39.290209 1644269 status.go:257] ha-642542 status: &{Name:ha-642542 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 02:44:39.290226 1644269 status.go:255] checking status of ha-642542-m02 ...
	I0730 02:44:39.290662 1644269 cli_runner.go:164] Run: docker container inspect ha-642542-m02 --format={{.State.Status}}
	I0730 02:44:39.308595 1644269 status.go:330] ha-642542-m02 host status = "Stopped" (err=<nil>)
	I0730 02:44:39.308620 1644269 status.go:343] host is not running, skipping remaining checks
	I0730 02:44:39.308629 1644269 status.go:257] ha-642542-m02 status: &{Name:ha-642542-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 02:44:39.308650 1644269 status.go:255] checking status of ha-642542-m03 ...
	I0730 02:44:39.308979 1644269 cli_runner.go:164] Run: docker container inspect ha-642542-m03 --format={{.State.Status}}
	I0730 02:44:39.329186 1644269 status.go:330] ha-642542-m03 host status = "Running" (err=<nil>)
	I0730 02:44:39.329213 1644269 host.go:66] Checking if "ha-642542-m03" exists ...
	I0730 02:44:39.329690 1644269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542-m03
	I0730 02:44:39.347681 1644269 host.go:66] Checking if "ha-642542-m03" exists ...
	I0730 02:44:39.348037 1644269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 02:44:39.348088 1644269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m03
	I0730 02:44:39.365655 1644269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38908 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m03/id_rsa Username:docker}
	I0730 02:44:39.457441 1644269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 02:44:39.470606 1644269 kubeconfig.go:125] found "ha-642542" server: "https://192.168.49.254:8443"
	I0730 02:44:39.470640 1644269 api_server.go:166] Checking apiserver status ...
	I0730 02:44:39.470720 1644269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 02:44:39.482384 1644269 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup
	I0730 02:44:39.492502 1644269 api_server.go:182] apiserver freezer: "2:freezer:/docker/dd7db2e145c053814d94a6e4a8b5f65b83fc6f1ebc873a1e8b3c561f17f5c203/crio/crio-f0fc89c6a1c2aa2bcca528eca1bca65c546f0ba52ed5244bc1507f5ec04cef3b"
	I0730 02:44:39.492617 1644269 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dd7db2e145c053814d94a6e4a8b5f65b83fc6f1ebc873a1e8b3c561f17f5c203/crio/crio-f0fc89c6a1c2aa2bcca528eca1bca65c546f0ba52ed5244bc1507f5ec04cef3b/freezer.state
	I0730 02:44:39.500987 1644269 api_server.go:204] freezer state: "THAWED"
	I0730 02:44:39.501022 1644269 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0730 02:44:39.508872 1644269 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0730 02:44:39.508900 1644269 status.go:422] ha-642542-m03 apiserver status = Running (err=<nil>)
	I0730 02:44:39.508911 1644269 status.go:257] ha-642542-m03 status: &{Name:ha-642542-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 02:44:39.508932 1644269 status.go:255] checking status of ha-642542-m04 ...
	I0730 02:44:39.509305 1644269 cli_runner.go:164] Run: docker container inspect ha-642542-m04 --format={{.State.Status}}
	I0730 02:44:39.527210 1644269 status.go:330] ha-642542-m04 host status = "Running" (err=<nil>)
	I0730 02:44:39.527236 1644269 host.go:66] Checking if "ha-642542-m04" exists ...
	I0730 02:44:39.527534 1644269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-642542-m04
	I0730 02:44:39.551278 1644269 host.go:66] Checking if "ha-642542-m04" exists ...
	I0730 02:44:39.551602 1644269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 02:44:39.551647 1644269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-642542-m04
	I0730 02:44:39.569094 1644269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38913 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/ha-642542-m04/id_rsa Username:docker}
	I0730 02:44:39.660945 1644269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 02:44:39.676618 1644269 status.go:257] ha-642542-m04 status: &{Name:ha-642542-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.68s)
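
The stderr trace shows how `status` decides "apiserver: Running" on each control-plane node: pgrep locates the kube-apiserver process, its freezer cgroup must read THAWED, and the load-balancer VIP's /healthz must answer 200. A hand-run sketch of the same probe; the VIP and port (192.168.49.254:8443) come from this run's trace, and the variable plumbing is an illustration rather than minikube's own code path:

    PID=$(minikube -p ha-642542 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*")
    CG=$(minikube -p ha-642542 ssh "sudo egrep ^[0-9]+:freezer: /proc/$PID/cgroup" | cut -d: -f3)
    minikube -p ha-642542 ssh "sudo cat /sys/fs/cgroup/freezer$CG/freezer.state"   # expect THAWED
    curl -sk https://192.168.49.254:8443/healthz                                   # expect: ok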

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.21s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 node start m02 -v=7 --alsologtostderr
E0730 02:44:42.290635 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-642542 node start m02 -v=7 --alsologtostderr: (21.580684827s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr
E0730 02:45:02.771198 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr: (1.511092058s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.21s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (6.24220589s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.24s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (191.37s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-642542 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-642542 -v=7 --alsologtostderr
E0730 02:45:43.732123 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-642542 -v=7 --alsologtostderr: (36.997809303s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-642542 --wait=true -v=7 --alsologtostderr
E0730 02:47:05.652352 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-642542 --wait=true -v=7 --alsologtostderr: (2m34.215412782s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-642542
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (191.37s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.19s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-642542 node delete m03 -v=7 --alsologtostderr: (11.262202856s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.19s)
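
The final assertion renders each node's Ready condition through a go-template; one way to express the same check that reads a little more directly is JSONPath, printing a `<name>: <status>` pair per node:

    kubectl --context ha-642542 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'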

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (35.66s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 stop -v=7 --alsologtostderr
E0730 02:49:07.149374 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-642542 stop -v=7 --alsologtostderr: (35.556747102s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr: exit status 7 (106.143997ms)

-- stdout --
	ha-642542
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-642542-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-642542-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0730 02:49:09.399854 1658842 out.go:291] Setting OutFile to fd 1 ...
	I0730 02:49:09.400101 1658842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:49:09.400131 1658842 out.go:304] Setting ErrFile to fd 2...
	I0730 02:49:09.400151 1658842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 02:49:09.400412 1658842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 02:49:09.400673 1658842 out.go:298] Setting JSON to false
	I0730 02:49:09.400740 1658842 mustload.go:65] Loading cluster: ha-642542
	I0730 02:49:09.400796 1658842 notify.go:220] Checking for updates...
	I0730 02:49:09.401252 1658842 config.go:182] Loaded profile config "ha-642542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 02:49:09.401293 1658842 status.go:255] checking status of ha-642542 ...
	I0730 02:49:09.401828 1658842 cli_runner.go:164] Run: docker container inspect ha-642542 --format={{.State.Status}}
	I0730 02:49:09.420880 1658842 status.go:330] ha-642542 host status = "Stopped" (err=<nil>)
	I0730 02:49:09.420907 1658842 status.go:343] host is not running, skipping remaining checks
	I0730 02:49:09.420930 1658842 status.go:257] ha-642542 status: &{Name:ha-642542 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 02:49:09.420966 1658842 status.go:255] checking status of ha-642542-m02 ...
	I0730 02:49:09.421314 1658842 cli_runner.go:164] Run: docker container inspect ha-642542-m02 --format={{.State.Status}}
	I0730 02:49:09.437843 1658842 status.go:330] ha-642542-m02 host status = "Stopped" (err=<nil>)
	I0730 02:49:09.437867 1658842 status.go:343] host is not running, skipping remaining checks
	I0730 02:49:09.437875 1658842 status.go:257] ha-642542-m02 status: &{Name:ha-642542-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 02:49:09.437906 1658842 status.go:255] checking status of ha-642542-m04 ...
	I0730 02:49:09.438206 1658842 cli_runner.go:164] Run: docker container inspect ha-642542-m04 --format={{.State.Status}}
	I0730 02:49:09.459155 1658842 status.go:330] ha-642542-m04 host status = "Stopped" (err=<nil>)
	I0730 02:49:09.459175 1658842 status.go:343] host is not running, skipping remaining checks
	I0730 02:49:09.459183 1658842 status.go:257] ha-642542-m04 status: &{Name:ha-642542-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.66s)
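
As with StopSecondaryNode earlier, `minikube status` signals cluster state through its exit code; exit status 7 here accompanies hosts reported as Stopped, so automation should branch on the code rather than scrape the table. A sketch:

    minikube -p ha-642542 status -v=7 --alsologtostderr
    rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "cluster not fully running (status exited $rc)"   # 7 in this run, all hosts stopped
    fi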

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

TestMultiControlPlane/serial/AddSecondaryNode (74.04s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-642542 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-642542 --control-plane -v=7 --alsologtostderr: (1m13.08468219s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-642542 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.04s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.70s)

TestJSONOutput/start/Command (59.13s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-851903 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-851903 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (59.124994093s)
--- PASS: TestJSONOutput/start/Command (59.13s)
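
With --output=json, start writes one CloudEvents-style JSON object per line instead of the human-readable table, which is what the Audit and parallel subtests below assert over (DistinctCurrentSteps and IncreasingCurrentSteps inspect the step counter). A sketch of following step progress with jq; the event type and data field names are assumptions drawn from minikube's JSON event schema:

    minikube start -p json-output-851903 --output=json --user=testUser --driver=docker --container-runtime=crio \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps)  \(.data.name)"'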

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-851903 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-851903 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-851903 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-851903 --output=json --user=testUser: (5.834056411s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-235297 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-235297 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.705536ms)

-- stdout --
	{"specversion":"1.0","id":"83d7d058-ea26-413d-ac17-e882dd200662","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-235297] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d804cb3-5bd2-4f06-aad8-db77caa62528","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19348"}}
	{"specversion":"1.0","id":"f941f0a5-5ec0-47aa-a9a4-f337317518e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0566d806-4e24-4cd5-b0c5-94d40437c139","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig"}}
	{"specversion":"1.0","id":"b3adc0fb-2752-4ed8-86c4-86ea4992fb7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube"}}
	{"specversion":"1.0","id":"34e4a977-fbbd-4e3d-b1b5-788b584d9e37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3a245b18-f669-4f87-9039-75d2f07a947b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"20513fe8-0c36-4347-b388-8da4a156eb80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-235297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-235297
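Note: failures surface on the same JSON stream as io.k8s.sigs.minikube.error events carrying the exit code (here 56, DRV_UNSUPPORTED_OS, because the driver 'fail' is not supported on linux/arm64). A sketch for extracting them, again assuming jq and a hypothetical profile name:

	out/minikube-linux-arm64 start -p err-demo --driver=fail --output=json \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + ")"'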
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (38.11s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-888352 --network=
E0730 02:54:07.149167 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:54:21.809075 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-888352 --network=: (36.000918552s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-888352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-888352
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-888352: (2.088747455s)
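Note: with the kic (docker) driver, --network controls which docker network the node container joins; an empty value, as above, appears to create a network named after the profile, and the test asserts it shows up in docker network ls. A minimal sketch with a hypothetical profile and network name:

	out/minikube-linux-arm64 start -p net-demo --network=demo-net --driver=docker --container-runtime=crio
	docker network ls --format {{.Name}}   # demo-net should appear
	out/minikube-linux-arm64 delete -p net-demo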
--- PASS: TestKicCustomNetwork/create_custom_network (38.11s)

TestKicCustomNetwork/use_default_bridge_network (33.78s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-759893 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-759893 --network=bridge: (31.731190023s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-759893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-759893
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-759893: (2.017283527s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.78s)

TestKicExistingNetwork (33.43s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-051152 --network=existing-network
E0730 02:55:30.194214 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-051152 --network=existing-network: (31.31797187s)
helpers_test.go:175: Cleaning up "existing-network-051152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-051152
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-051152: (1.950749092s)
--- PASS: TestKicExistingNetwork (33.43s)

TestKicCustomSubnet (36.44s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-421033 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-421033 --subnet=192.168.60.0/24: (34.358948436s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-421033 --format "{{(index .IPAM.Config 0).Subnet}}"
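Note: the inspect command above is how the test verifies the requested subnet was applied. A standalone sketch (the profile name is hypothetical; the kic network is typically named after the profile):

	out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24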
helpers_test.go:175: Cleaning up "custom-subnet-421033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-421033
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-421033: (2.062247037s)
--- PASS: TestKicCustomSubnet (36.44s)

TestKicStaticIP (33.69s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-198355 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-198355 --static-ip=192.168.200.200: (31.399759119s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-198355 ip
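Note: --static-ip pins the node container's address, and `minikube ip` is the check. Sketch with a hypothetical profile name:

	out/minikube-linux-arm64 start -p ip-demo --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p ip-demo ip   # expect 192.168.200.200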
helpers_test.go:175: Cleaning up "static-ip-198355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-198355
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-198355: (2.132859022s)
--- PASS: TestKicStaticIP (33.69s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.86s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-809001 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-809001 --driver=docker  --container-runtime=crio: (29.420677394s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-811848 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-811848 --driver=docker  --container-runtime=crio: (33.964239939s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-809001
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-811848
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-811848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-811848
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-811848: (1.961993073s)
helpers_test.go:175: Cleaning up "first-809001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-809001
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-809001: (2.262568879s)
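Note: `minikube profile <name>` switches the active profile between the two clusters, and the test then reads `profile list -ojson` to confirm the switch. Sketch using the profiles from this run:

	out/minikube-linux-arm64 profile first-809001
	out/minikube-linux-arm64 profile list -ojson   # first-809001 should now be reported as the active profile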
--- PASS: TestMinikubeProfile (68.86s)

TestMountStart/serial/StartWithMountFirst (7.28s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-596058 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-596058 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.281321565s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.28s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-596058 ssh -- ls /minikube-host
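Note: --mount binds a host directory into the node (mounted at /minikube-host by default, tunable via the --mount-* flags used above), and the ssh listing is the verification. A condensed sketch with a hypothetical profile name:

	out/minikube-linux-arm64 start -p mount-demo --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host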
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (7.03s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-609095 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-609095 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.028613227s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.03s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-609095 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-596058 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-596058 --alsologtostderr -v=5: (1.597533739s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-609095 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-609095
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-609095: (1.192030741s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.83s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-609095
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-609095: (6.827484102s)
--- PASS: TestMountStart/serial/RestartStopped (7.83s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-609095 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (86.93s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-652431 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0730 02:59:07.148965 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 02:59:21.809354 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-652431 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m26.435717056s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.93s)

TestMultiNode/serial/DeployApp2Nodes (5.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-652431 -- rollout status deployment/busybox: (3.147135254s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- exec busybox-fc5497c4f-5j2k8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- exec busybox-fc5497c4f-pgpnd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- exec busybox-fc5497c4f-5j2k8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- exec busybox-fc5497c4f-pgpnd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- exec busybox-fc5497c4f-5j2k8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- exec busybox-fc5497c4f-pgpnd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.09s)

TestMultiNode/serial/PingHostFrom2Pods (1.16s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- exec busybox-fc5497c4f-5j2k8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- exec busybox-fc5497c4f-5j2k8 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- exec busybox-fc5497c4f-pgpnd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-652431 -- exec busybox-fc5497c4f-pgpnd -- sh -c "ping -c 1 192.168.58.1"
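Note: host.minikube.internal resolves, from inside pods, to the host-side gateway of the cluster network (192.168.58.1 here); the test extracts the address with nslookup and pings it. A sketch against an existing pod (the pod name is a placeholder):

	kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl exec <busybox-pod> -- sh -c "ping -c 1 192.168.58.1"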
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.16s)

TestMultiNode/serial/AddNode (31.4s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-652431 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-652431 -v 3 --alsologtostderr: (30.730393889s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.40s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-652431 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (9.99s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp testdata/cp-test.txt multinode-652431:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp multinode-652431:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2277064189/001/cp-test_multinode-652431.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp multinode-652431:/home/docker/cp-test.txt multinode-652431-m02:/home/docker/cp-test_multinode-652431_multinode-652431-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m02 "sudo cat /home/docker/cp-test_multinode-652431_multinode-652431-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp multinode-652431:/home/docker/cp-test.txt multinode-652431-m03:/home/docker/cp-test_multinode-652431_multinode-652431-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m03 "sudo cat /home/docker/cp-test_multinode-652431_multinode-652431-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp testdata/cp-test.txt multinode-652431-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp multinode-652431-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2277064189/001/cp-test_multinode-652431-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp multinode-652431-m02:/home/docker/cp-test.txt multinode-652431:/home/docker/cp-test_multinode-652431-m02_multinode-652431.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431 "sudo cat /home/docker/cp-test_multinode-652431-m02_multinode-652431.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp multinode-652431-m02:/home/docker/cp-test.txt multinode-652431-m03:/home/docker/cp-test_multinode-652431-m02_multinode-652431-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m03 "sudo cat /home/docker/cp-test_multinode-652431-m02_multinode-652431-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp testdata/cp-test.txt multinode-652431-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp multinode-652431-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2277064189/001/cp-test_multinode-652431-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp multinode-652431-m03:/home/docker/cp-test.txt multinode-652431:/home/docker/cp-test_multinode-652431-m03_multinode-652431.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431 "sudo cat /home/docker/cp-test_multinode-652431-m03_multinode-652431.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 cp multinode-652431-m03:/home/docker/cp-test.txt multinode-652431-m02:/home/docker/cp-test_multinode-652431-m03_multinode-652431-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m02 "sudo cat /home/docker/cp-test_multinode-652431-m03_multinode-652431-m02.txt"
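Note: `minikube cp` addresses nodes by prefixing the path with the node name, covering host-to-node and node-to-node copies, and `ssh -n <node>` verifies the result. Sketch using commands lifted from this run:

	out/minikube-linux-arm64 -p multinode-652431 cp testdata/cp-test.txt multinode-652431-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-652431 ssh -n multinode-652431-m02 "sudo cat /home/docker/cp-test.txt"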
--- PASS: TestMultiNode/serial/CopyFile (9.99s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-652431 node stop m03: (1.224298196s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-652431 status: exit status 7 (511.828976ms)

-- stdout --
	multinode-652431
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-652431-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-652431-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-652431 status --alsologtostderr: exit status 7 (539.091759ms)

-- stdout --
	multinode-652431
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-652431-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-652431-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0730 03:00:43.979318 1714026 out.go:291] Setting OutFile to fd 1 ...
	I0730 03:00:43.979566 1714026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 03:00:43.979594 1714026 out.go:304] Setting ErrFile to fd 2...
	I0730 03:00:43.979614 1714026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 03:00:43.979941 1714026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 03:00:43.980201 1714026 out.go:298] Setting JSON to false
	I0730 03:00:43.980273 1714026 mustload.go:65] Loading cluster: multinode-652431
	I0730 03:00:43.980356 1714026 notify.go:220] Checking for updates...
	I0730 03:00:43.981563 1714026 config.go:182] Loaded profile config "multinode-652431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 03:00:43.981615 1714026 status.go:255] checking status of multinode-652431 ...
	I0730 03:00:43.982243 1714026 cli_runner.go:164] Run: docker container inspect multinode-652431 --format={{.State.Status}}
	I0730 03:00:44.005163 1714026 status.go:330] multinode-652431 host status = "Running" (err=<nil>)
	I0730 03:00:44.005195 1714026 host.go:66] Checking if "multinode-652431" exists ...
	I0730 03:00:44.005520 1714026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-652431
	I0730 03:00:44.040087 1714026 host.go:66] Checking if "multinode-652431" exists ...
	I0730 03:00:44.040406 1714026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 03:00:44.040461 1714026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-652431
	I0730 03:00:44.063765 1714026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39019 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/multinode-652431/id_rsa Username:docker}
	I0730 03:00:44.161190 1714026 ssh_runner.go:195] Run: systemctl --version
	I0730 03:00:44.165608 1714026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 03:00:44.176968 1714026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 03:00:44.247025 1714026 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-07-30 03:00:44.237745842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 03:00:44.247609 1714026 kubeconfig.go:125] found "multinode-652431" server: "https://192.168.58.2:8443"
	I0730 03:00:44.247642 1714026 api_server.go:166] Checking apiserver status ...
	I0730 03:00:44.247693 1714026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0730 03:00:44.258506 1714026 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1453/cgroup
	I0730 03:00:44.268336 1714026 api_server.go:182] apiserver freezer: "2:freezer:/docker/2b82698a0fa0e5ff6a5ab24ba1d7d6f668906d83eb42b34aba7300cde0f2dbef/crio/crio-1230c74a84e223d2dc811d450950b4bd22b2c6b1694bc2eb64402019a616a9f0"
	I0730 03:00:44.268418 1714026 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2b82698a0fa0e5ff6a5ab24ba1d7d6f668906d83eb42b34aba7300cde0f2dbef/crio/crio-1230c74a84e223d2dc811d450950b4bd22b2c6b1694bc2eb64402019a616a9f0/freezer.state
	I0730 03:00:44.277263 1714026 api_server.go:204] freezer state: "THAWED"
	I0730 03:00:44.277303 1714026 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0730 03:00:44.284839 1714026 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0730 03:00:44.284870 1714026 status.go:422] multinode-652431 apiserver status = Running (err=<nil>)
	I0730 03:00:44.284889 1714026 status.go:257] multinode-652431 status: &{Name:multinode-652431 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 03:00:44.284912 1714026 status.go:255] checking status of multinode-652431-m02 ...
	I0730 03:00:44.285223 1714026 cli_runner.go:164] Run: docker container inspect multinode-652431-m02 --format={{.State.Status}}
	I0730 03:00:44.303395 1714026 status.go:330] multinode-652431-m02 host status = "Running" (err=<nil>)
	I0730 03:00:44.303422 1714026 host.go:66] Checking if "multinode-652431-m02" exists ...
	I0730 03:00:44.303725 1714026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-652431-m02
	I0730 03:00:44.320856 1714026 host.go:66] Checking if "multinode-652431-m02" exists ...
	I0730 03:00:44.321162 1714026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0730 03:00:44.321210 1714026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-652431-m02
	I0730 03:00:44.339753 1714026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39024 SSHKeyPath:/home/jenkins/minikube-integration/19348-1592571/.minikube/machines/multinode-652431-m02/id_rsa Username:docker}
	I0730 03:00:44.429574 1714026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0730 03:00:44.441274 1714026 status.go:257] multinode-652431-m02 status: &{Name:multinode-652431-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0730 03:00:44.441313 1714026 status.go:255] checking status of multinode-652431-m03 ...
	I0730 03:00:44.441633 1714026 cli_runner.go:164] Run: docker container inspect multinode-652431-m03 --format={{.State.Status}}
	I0730 03:00:44.459062 1714026 status.go:330] multinode-652431-m03 host status = "Stopped" (err=<nil>)
	I0730 03:00:44.459089 1714026 status.go:343] host is not running, skipping remaining checks
	I0730 03:00:44.459098 1714026 status.go:257] multinode-652431-m03 status: &{Name:multinode-652431-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)

TestMultiNode/serial/StartAfterStop (10s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 node start m03 -v=7 --alsologtostderr
E0730 03:00:44.854201 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-652431 node start m03 -v=7 --alsologtostderr: (9.230043302s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
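Note: individual nodes are addressed by their short name (m02, m03, ...). A sketch of the stop/start cycle that this and the previous test exercise:

	out/minikube-linux-arm64 -p multinode-652431 node stop m03    # status then exits 7 while a node is down
	out/minikube-linux-arm64 -p multinode-652431 node start m03
	out/minikube-linux-arm64 -p multinode-652431 status           # exits 0 once all nodes run again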
--- PASS: TestMultiNode/serial/StartAfterStop (10.00s)

TestMultiNode/serial/RestartKeepsNodes (86.59s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-652431
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-652431
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-652431: (24.780151085s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-652431 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-652431 --wait=true -v=8 --alsologtostderr: (1m1.67753932s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-652431
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.59s)

TestMultiNode/serial/DeleteNode (5.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-652431 node delete m03: (4.656715702s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
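Note: after `node delete`, the test confirms the minikube view and the Kubernetes view agree. Sketch:

	out/minikube-linux-arm64 -p multinode-652431 node delete m03
	kubectl get nodes   # the m03 node should no longer be listed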
--- PASS: TestMultiNode/serial/DeleteNode (5.33s)

TestMultiNode/serial/StopMultiNode (23.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-652431 stop: (23.654975902s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-652431 status: exit status 7 (91.013127ms)

-- stdout --
	multinode-652431
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-652431-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-652431 status --alsologtostderr: exit status 7 (91.398455ms)

-- stdout --
	multinode-652431
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-652431-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0730 03:02:50.183046 1721512 out.go:291] Setting OutFile to fd 1 ...
	I0730 03:02:50.183156 1721512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 03:02:50.183167 1721512 out.go:304] Setting ErrFile to fd 2...
	I0730 03:02:50.183173 1721512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 03:02:50.183419 1721512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 03:02:50.183585 1721512 out.go:298] Setting JSON to false
	I0730 03:02:50.183621 1721512 mustload.go:65] Loading cluster: multinode-652431
	I0730 03:02:50.183723 1721512 notify.go:220] Checking for updates...
	I0730 03:02:50.184066 1721512 config.go:182] Loaded profile config "multinode-652431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 03:02:50.184078 1721512 status.go:255] checking status of multinode-652431 ...
	I0730 03:02:50.184568 1721512 cli_runner.go:164] Run: docker container inspect multinode-652431 --format={{.State.Status}}
	I0730 03:02:50.202822 1721512 status.go:330] multinode-652431 host status = "Stopped" (err=<nil>)
	I0730 03:02:50.202881 1721512 status.go:343] host is not running, skipping remaining checks
	I0730 03:02:50.202912 1721512 status.go:257] multinode-652431 status: &{Name:multinode-652431 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0730 03:02:50.202940 1721512 status.go:255] checking status of multinode-652431-m02 ...
	I0730 03:02:50.203250 1721512 cli_runner.go:164] Run: docker container inspect multinode-652431-m02 --format={{.State.Status}}
	I0730 03:02:50.227025 1721512 status.go:330] multinode-652431-m02 host status = "Stopped" (err=<nil>)
	I0730 03:02:50.227046 1721512 status.go:343] host is not running, skipping remaining checks
	I0730 03:02:50.227054 1721512 status.go:257] multinode-652431-m02 status: &{Name:multinode-652431-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.84s)

TestMultiNode/serial/RestartMultiNode (55.86s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-652431 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-652431 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (55.207451414s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-652431 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.86s)

TestMultiNode/serial/ValidateNameConflict (32.43s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-652431
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-652431-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-652431-m02 --driver=docker  --container-runtime=crio: exit status 14 (72.517443ms)

-- stdout --
	* [multinode-652431-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-652431-m02' is duplicated with machine name 'multinode-652431-m02' in profile 'multinode-652431'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-652431-m03 --driver=docker  --container-runtime=crio
E0730 03:04:07.149183 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-652431-m03 --driver=docker  --container-runtime=crio: (29.881505635s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-652431
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-652431: exit status 80 (486.407859ms)

-- stdout --
	* Adding node m03 to cluster multinode-652431 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-652431-m03 already exists in multinode-652431-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-652431-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-652431-m03: (1.945432538s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.43s)
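Note: this test demonstrates minikube's profile-name uniqueness check. A minimal sketch of reproducing it by hand, assuming a minikube binary on PATH and an existing multi-node profile (all profile names below are illustrative):

    # "multinode-652431-m02" is already the second machine of profile "multinode-652431",
    # so reusing it as a profile name fails fast with MK_USAGE (exit status 14):
    minikube start -p multinode-652431-m02 --driver=docker --container-runtime=crio
    # a non-colliding name is accepted and creates a separate single-node cluster:
    minikube start -p multinode-652431-m03 --driver=docker --container-runtime=crio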

TestPreload (124.5s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-598267 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-598267 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m32.006392034s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-598267 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-598267 image pull gcr.io/k8s-minikube/busybox: (1.899854154s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-598267
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-598267: (5.79162661s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-598267 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-598267 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (22.000064614s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-598267 image list
helpers_test.go:175: Cleaning up "test-preload-598267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-598267
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-598267: (2.456880867s)
--- PASS: TestPreload (124.50s)
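Note: the preload test's flow can be replayed by hand to check that an image pulled into a non-preloaded cluster survives a stop/start cycle; a sketch, with an illustrative profile name:

    minikube start -p preload-check --preload=false --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p preload-check image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-check
    minikube start -p preload-check
    minikube -p preload-check image list    # busybox should still appear
    minikube delete -p preload-check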

TestScheduledStopUnix (109.34s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-654717 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-654717 --memory=2048 --driver=docker  --container-runtime=crio: (32.538132106s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-654717 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-654717 -n scheduled-stop-654717
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-654717 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-654717 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-654717 -n scheduled-stop-654717
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-654717
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-654717 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-654717
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-654717: exit status 7 (65.566137ms)

-- stdout --
	scheduled-stop-654717
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-654717 -n scheduled-stop-654717
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-654717 -n scheduled-stop-654717: exit status 7 (69.975915ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-654717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-654717
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-654717: (5.08020128s)
--- PASS: TestScheduledStopUnix (109.34s)
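Note: the scheduled-stop sequence exercised above, as a hand-runnable sketch (profile name illustrative):

    minikube start -p sched-demo --memory=2048 --driver=docker --container-runtime=crio
    minikube stop -p sched-demo --schedule 5m          # arm a stop five minutes out
    minikube stop -p sched-demo --cancel-scheduled     # disarm it
    minikube stop -p sched-demo --schedule 15s         # re-arm with a short fuse
    sleep 20
    minikube status -p sched-demo                      # exit status 7 once the host is Stopped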

TestInsufficientStorage (10.77s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-126070 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-126070 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.310745865s)

-- stdout --
	{"specversion":"1.0","id":"04f075e0-c041-4886-84d1-2ad6df948579","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-126070] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1dfdbcc1-1846-4e24-9dee-be3e4f8825c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19348"}}
	{"specversion":"1.0","id":"53b2330a-54b0-4ee9-bb3c-d23c362ba620","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"745ac35a-5b57-4bc8-9c03-3754527eb5f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig"}}
	{"specversion":"1.0","id":"903bd308-9a71-43d8-9e00-f6caccdcb488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube"}}
	{"specversion":"1.0","id":"3110f9f7-7c9d-4c63-a613-83d54ba8076f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"29c4d328-b1cb-455e-9848-d744cc6c97cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3f368642-9c76-41b9-ab55-584acd85ab71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"01289b40-fb84-467e-a264-7e43e392d410","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d216e31f-c131-4e13-8935-e9f585c1b3c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"161888e4-5b63-4a22-8dfe-a48c58aa8d20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4dc40d0a-4c58-4aea-9cc0-9ae16e9c24f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-126070\" primary control-plane node in \"insufficient-storage-126070\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4c223e3-0a33-4e86-98e0-12addf1ab292","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f0253e2-e135-44cb-aad2-a4f7c4e29cf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b33acc7-7868-46a7-8f98-e4630927720d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-126070 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-126070 --output=json --layout=cluster: exit status 7 (282.730843ms)

-- stdout --
	{"Name":"insufficient-storage-126070","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-126070","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0730 03:08:24.946815 1739213 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-126070" does not appear in /home/jenkins/minikube-integration/19348-1592571/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-126070 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-126070 --output=json --layout=cluster: exit status 7 (282.679595ms)

-- stdout --
	{"Name":"insufficient-storage-126070","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-126070","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0730 03:08:25.229779 1739276 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-126070" does not appear in /home/jenkins/minikube-integration/19348-1592571/kubeconfig
	E0730 03:08:25.239946 1739276 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/insufficient-storage-126070/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-126070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-126070
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-126070: (1.892923319s)
--- PASS: TestInsufficientStorage (10.77s)
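Note: per the event stream above, the test fakes a full /var with the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables rather than actually exhausting the disk; a sketch of the same simulation (profile name illustrative):

    # minikube now believes /var is at 100% of capacity and aborts with
    # RSRC_DOCKER_STORAGE (exit status 26):
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --memory=2048 --output=json --driver=docker --container-runtime=crio
    # the error text itself notes that --force skips the check:
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --force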

TestRunningBinaryUpgrade (80.71s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.963604056 start -p running-upgrade-865268 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.963604056 start -p running-upgrade-865268 --memory=2200 --vm-driver=docker  --container-runtime=crio: (48.66193253s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-865268 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-865268 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.891939747s)
helpers_test.go:175: Cleaning up "running-upgrade-865268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-865268
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-865268: (4.19301433s)
--- PASS: TestRunningBinaryUpgrade (80.71s)
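Note: the upgrade path above starts a cluster with an old release binary, then points the current binary at the same still-running profile; a sketch, with an illustrative path standing in for the downloaded old release:

    /path/to/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=crio
    # the newer binary takes over the running profile in place, without deleting it:
    minikube start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=crio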

TestKubernetesUpgrade (393.28s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-178340 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-178340 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m15.312239449s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-178340
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-178340: (1.748335124s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-178340 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-178340 status --format={{.Host}}: exit status 7 (96.664467ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-178340 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-178340 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.595883087s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-178340 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-178340 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-178340 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (85.20804ms)

-- stdout --
	* [kubernetes-upgrade-178340] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-178340
	    minikube start -p kubernetes-upgrade-178340 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1783402 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-178340 --kubernetes-version=v1.31.0-beta.0
	    

                                                
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-178340 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-178340 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.862657361s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-178340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-178340
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-178340: (2.475353881s)
--- PASS: TestKubernetesUpgrade (393.28s)
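Note: the sequence above shows that upgrades across a stop are supported while in-place downgrades are refused; a condensed sketch (profile name illustrative):

    minikube start -p k8s-upgrade --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    minikube stop -p k8s-upgrade
    minikube start -p k8s-upgrade --kubernetes-version=v1.31.0-beta.0   # upgrade succeeds
    minikube start -p k8s-upgrade --kubernetes-version=v1.20.0          # downgrade: K8S_DOWNGRADE_UNSUPPORTED, exit status 106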

TestMissingContainerUpgrade (101.32s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.41207352 start -p missing-upgrade-909942 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.41207352 start -p missing-upgrade-909942 --memory=2200 --driver=docker  --container-runtime=crio: (35.394319686s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-909942
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-909942: (1.69056035s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-909942
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-909942 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0730 03:14:07.150305 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 03:14:21.808450 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-909942 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.742220551s)
helpers_test.go:175: Cleaning up "missing-upgrade-909942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-909942
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-909942: (4.7554263s)
--- PASS: TestMissingContainerUpgrade (101.32s)
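Note: this test deletes the Docker container behind a profile out from under minikube and verifies that a later start recreates it; a sketch (profile name illustrative; the container name matches the profile name):

    docker stop missing-demo && docker rm missing-demo
    minikube start -p missing-demo    # detects the missing container and rebuilds it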

TestPause/serial/Start (67.72s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-436007 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-436007 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m7.722401214s)
--- PASS: TestPause/serial/Start (67.72s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670340 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-670340 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (92.62935ms)

-- stdout --
	* [NoKubernetes-670340] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
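Note: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error text points at a global config default as the usual culprit; a sketch of the failure and the suggested fix (profile name illustrative):

    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20   # MK_USAGE, exit status 14
    minikube config unset kubernetes-version    # clear a globally configured default version
    minikube start -p nok8s-demo --no-kubernetes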

TestNoKubernetes/serial/StartWithK8s (43s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670340 --driver=docker  --container-runtime=crio
E0730 03:09:07.148520 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670340 --driver=docker  --container-runtime=crio: (42.637695616s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-670340 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.00s)

TestNoKubernetes/serial/StartWithStopK8s (17.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670340 --no-kubernetes --driver=docker  --container-runtime=crio
E0730 03:09:21.809213 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670340 --no-kubernetes --driver=docker  --container-runtime=crio: (15.203571964s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-670340 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-670340 status -o json: exit status 2 (305.852959ms)

-- stdout --
	{"Name":"NoKubernetes-670340","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-670340
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-670340: (1.993439442s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.50s)

TestNoKubernetes/serial/Start (6.5s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670340 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670340 --no-kubernetes --driver=docker  --container-runtime=crio: (6.495679497s)
--- PASS: TestNoKubernetes/serial/Start (6.50s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-670340 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-670340 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.900647ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
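Note: the check above relies on systemctl's exit codes: is-active returns status 3 for an inactive unit, which minikube ssh surfaces as a non-zero exit, so the "failure" here is the expected outcome. A sketch (profile name illustrative):

    minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero: kubelet is not running, which is what the test wants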

TestNoKubernetes/serial/ProfileList (1.12s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

TestPause/serial/SecondStartNoReconfiguration (30.93s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-436007 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-436007 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.904431188s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.93s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-670340
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-670340: (1.264531007s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (7.7s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670340 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670340 --driver=docker  --container-runtime=crio: (7.69638068s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.70s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-670340 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-670340 "sudo systemctl is-active --quiet service kubelet": exit status 1 (329.582884ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestNetworkPlugins/group/false (3.57s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-488354 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-488354 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (226.426242ms)

-- stdout --
	* [false-488354] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
** stderr ** 
	I0730 03:09:52.449762 1750879 out.go:291] Setting OutFile to fd 1 ...
	I0730 03:09:52.449974 1750879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 03:09:52.450000 1750879 out.go:304] Setting ErrFile to fd 2...
	I0730 03:09:52.450019 1750879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0730 03:09:52.450277 1750879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19348-1592571/.minikube/bin
	I0730 03:09:52.450712 1750879 out.go:298] Setting JSON to false
	I0730 03:09:52.451731 1750879 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":89538,"bootTime":1722219454,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0730 03:09:52.451828 1750879 start.go:139] virtualization:  
	I0730 03:09:52.455468 1750879 out.go:177] * [false-488354] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0730 03:09:52.457819 1750879 out.go:177]   - MINIKUBE_LOCATION=19348
	I0730 03:09:52.457880 1750879 notify.go:220] Checking for updates...
	I0730 03:09:52.463166 1750879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0730 03:09:52.465109 1750879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19348-1592571/kubeconfig
	I0730 03:09:52.467005 1750879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19348-1592571/.minikube
	I0730 03:09:52.469395 1750879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0730 03:09:52.471840 1750879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0730 03:09:52.474631 1750879 config.go:182] Loaded profile config "pause-436007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0730 03:09:52.474729 1750879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0730 03:09:52.502923 1750879 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0730 03:09:52.503031 1750879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0730 03:09:52.595297 1750879 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-30 03:09:52.585309607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0730 03:09:52.595410 1750879 docker.go:307] overlay module found
	I0730 03:09:52.598223 1750879 out.go:177] * Using the docker driver based on user configuration
	I0730 03:09:52.600662 1750879 start.go:297] selected driver: docker
	I0730 03:09:52.600678 1750879 start.go:901] validating driver "docker" against <nil>
	I0730 03:09:52.600691 1750879 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0730 03:09:52.603498 1750879 out.go:177] 
	W0730 03:09:52.606008 1750879 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0730 03:09:52.608754 1750879 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-488354 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-488354

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-488354

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-488354

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-488354

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-488354

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-488354

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-488354

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-488354

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-488354

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-488354

>>> host: /etc/nsswitch.conf:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: /etc/hosts:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: /etc/resolv.conf:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-488354

>>> host: crictl pods:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: crictl containers:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> k8s: describe netcat deployment:
error: context "false-488354" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-488354" does not exist

>>> k8s: netcat logs:
error: context "false-488354" does not exist

>>> k8s: describe coredns deployment:
error: context "false-488354" does not exist

>>> k8s: describe coredns pods:
error: context "false-488354" does not exist

>>> k8s: coredns logs:
error: context "false-488354" does not exist

>>> k8s: describe api server pod(s):
error: context "false-488354" does not exist

>>> k8s: api server logs:
error: context "false-488354" does not exist

>>> host: /etc/cni:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: ip a s:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: ip r s:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: iptables-save:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: iptables table nat:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> k8s: describe kube-proxy daemon set:
error: context "false-488354" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-488354" does not exist

>>> k8s: kube-proxy logs:
error: context "false-488354" does not exist

>>> host: kubelet daemon status:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: kubelet daemon config:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> k8s: kubelet logs:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 30 Jul 2024 03:09:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-436007
contexts:
- context:
    cluster: pause-436007
    extensions:
    - extension:
        last-update: Tue, 30 Jul 2024 03:09:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-436007
  name: pause-436007
current-context: pause-436007
kind: Config
preferences: {}
users:
- name: pause-436007
  user:
    client-certificate: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/pause-436007/client.crt
    client-key: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/pause-436007/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-488354

>>> host: docker daemon status:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: docker daemon config:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: /etc/docker/daemon.json:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: docker system info:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: cri-docker daemon status:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: cri-docker daemon config:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: cri-dockerd version:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: containerd daemon status:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: containerd daemon config:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: /etc/containerd/config.toml:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: containerd config dump:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: crio daemon status:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: crio daemon config:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: /etc/crio:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

>>> host: crio config:
* Profile "false-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-488354"

----------------------- debugLogs end: false-488354 [took: 3.202953692s] --------------------------------
helpers_test.go:175: Cleaning up "false-488354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-488354
--- PASS: TestNetworkPlugins/group/false (3.57s)
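
Note on the debugLogs dump above: every ">>> host:" probe prints the same hint because the false-488354 profile had already been torn down before collection ran. A collector could skip that noise by consulting "minikube profile list --output json" first (the same command TestPause/serial/VerifyDeletedResources runs below). A minimal sketch in Go; the valid/invalid field names are an assumption about that JSON shape, not verified against this minikube build:

// profilecheck.go - illustrative only, not part of the test suite.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList assumes the JSON printed by `minikube profile list --output json`
// has "valid" and "invalid" arrays of profiles with a "Name" field.
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("false-488354")
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	fmt.Println("collect debug logs?", ok)
}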

TestPause/serial/Pause (1.37s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-436007 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-436007 --alsologtostderr -v=5: (1.367121872s)
--- PASS: TestPause/serial/Pause (1.37s)

TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-436007 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-436007 --output=json --layout=cluster: exit status 2 (412.899501ms)

-- stdout --
	{"Name":"pause-436007","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-436007","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
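
For readers parsing this output elsewhere: the status JSON above encodes component state as HTTP-like codes (200 OK, 418 Paused, 405 Stopped), and the status command itself exits 2 for a paused cluster, so the JSON rather than the exit code carries the verdict. A minimal decoding sketch in Go, with struct shapes inferred from the JSON printed above (not minikube's own types):

// clusterstate.go - illustrative decoder for the JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"` // 418 = Paused in the run above
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	// Trimmed copy of the stdout captured by the test above.
	raw := `{"Name":"pause-436007","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-436007","StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var cs clusterState
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Println(cs.Name, cs.StatusName) // pause-436007 Paused
	for _, n := range cs.Nodes {
		for _, c := range n.Components {
			fmt.Println(n.Name, c.Name, c.StatusName)
		}
	}
}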

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-436007 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.85s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-436007 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

TestPause/serial/DeletePaused (2.76s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-436007 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-436007 --alsologtostderr -v=5: (2.764525438s)
--- PASS: TestPause/serial/DeletePaused (2.76s)

TestPause/serial/VerifyDeletedResources (0.42s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-436007
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-436007: exit status 1 (20.818196ms)

-- stdout --
	[]

-- /stdout --
** stderr **
	Error response from daemon: get pause-436007: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.42s)
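
The deleted-volume assertion above leans on `docker volume inspect` failing once the volume is gone: exit status 1 plus "no such volume" on stderr is the expected outcome. A minimal sketch of that check in Go; the helper is illustrative, not the suite's code:

// volumegone.go - treats the expected inspect failure as success.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func volumeGone(name string) (bool, error) {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	if err == nil {
		return false, nil // inspect succeeded, so the volume still exists
	}
	if strings.Contains(string(out), "no such volume") {
		return true, nil // the failure we wanted: the volume is gone
	}
	return false, fmt.Errorf("unexpected inspect failure: %v: %s", err, out)
}

func main() {
	gone, err := volumeGone("pause-436007")
	fmt.Println(gone, err)
}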

TestStoppedBinaryUpgrade/Setup (0.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

TestStoppedBinaryUpgrade/Upgrade (95.12s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1459902852 start -p stopped-upgrade-759656 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0730 03:12:10.195274 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1459902852 start -p stopped-upgrade-759656 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m3.570002047s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1459902852 -p stopped-upgrade-759656 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1459902852 -p stopped-upgrade-759656 stop: (2.804069833s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-759656 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-759656 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.74600027s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (95.12s)
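
The upgrade exercised above is three steps: start a cluster with a previously released binary (a v1.26.0 copy downloaded to a temp path), stop it with that same binary, then start the same profile with the binary under test. A minimal sketch of the sequence in Go; the binary paths below are placeholders, not the test's real temp file:

// upgradeflow.go - illustrative replay of the stopped-binary upgrade flow.
package main

import (
	"log"
	"os/exec"
)

func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.26.0"    // placeholder for the released binary
	newBin := "out/minikube-linux-arm64" // binary under test
	profile := "stopped-upgrade-759656"

	run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=crio")
	run(oldBin, "-p", profile, "stop")
	run(newBin, "start", "-p", profile, "--memory=2200", "--driver=docker", "--container-runtime=crio")
}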

TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-759656
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

TestNetworkPlugins/group/auto/Start (61.62s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m1.618101702s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.62s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-488354 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-488354 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mtxdg" [339ed166-d2ec-4e1a-bf6d-c7e734e8a1a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0730 03:17:24.855207 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-mtxdg" [339ed166-d2ec-4e1a-bf6d-c7e734e8a1a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004004905s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.42s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-488354 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
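
Each network-plugin group above ends with the same three in-pod probes through the netcat deployment: `nslookup kubernetes.default` for cluster DNS, `nc -z localhost 8080` for local reachability, and `nc -z netcat 8080` for hairpin traffic (the pod reaching itself through its own service). A minimal sketch wrapping those kubectl invocations in Go; the helper is illustrative, though the deployment and service names match the test fixture:

// netprobes.go - the three connectivity probes, via kubectl exec.
package main

import (
	"fmt"
	"os/exec"
)

func probe(ctx string, args ...string) error {
	base := []string{"--context", ctx, "exec", "deployment/netcat", "--"}
	return exec.Command("kubectl", append(base, args...)...).Run()
}

func main() {
	ctx := "auto-488354"
	checks := map[string][]string{
		"DNS":       {"nslookup", "kubernetes.default"},
		"Localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"HairPin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"}, // pod -> its own service
	}
	for name, args := range checks {
		fmt.Println(name, "ok:", probe(ctx, args...) == nil)
	}
}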

TestNetworkPlugins/group/kindnet/Start (61.39s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m1.38795181s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.39s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nq7vb" [800d4592-03db-4c7f-89af-dc6ffa194f91] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004543062s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
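
ControllerPod waits up to 10m0s for a pod carrying the plugin's label (app=kindnet here, k8s-app=calico-node for calico, app=flannel for flannel) to be Running. A minimal polling sketch in Go; the jsonpath expression is standard kubectl, but the helper itself is an assumption about how one might reimplement the wait, not the suite's code:

// waitpod.go - poll until a labeled pod reports phase Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitRunning(ctx, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("no Running pod for %q in %s after %s", selector, ns, timeout)
}

func main() {
	fmt.Println(waitRunning("kindnet-488354", "kube-system", "app=kindnet", 10*time.Minute))
}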

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-488354 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-488354 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7vkpv" [f319fb14-b76e-4d8c-b7a8-ee9a40cb370c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0730 03:19:07.148872 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-7vkpv" [f319fb14-b76e-4d8c-b7a8-ee9a40cb370c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004503494s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-488354 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (70.53s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m10.530980132s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.53s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hhcrh" [b1a9900d-c167-4643-bdf9-27209fc41d8a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006050972s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-488354 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-488354 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fwmkv" [f599986c-c6c8-4493-b9fe-e6431b0b1b4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fwmkv" [f599986c-c6c8-4493-b9fe-e6431b0b1b4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003919726s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-488354 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (77.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m17.622486746s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (77.62s)

TestNetworkPlugins/group/enable-default-cni/Start (81.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0730 03:22:22.855944 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:22.861247 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:22.871529 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:22.891777 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:22.932098 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:23.012381 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:23.172533 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:23.493068 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:24.133721 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:25.413955 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:27.975075 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:22:33.095380 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m21.229080283s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.23s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-488354 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-488354 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-58bk4" [3935af2f-d605-46a1-9e67-382f6cbbdb74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0730 03:22:43.335978 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-58bk4" [3935af2f-d605-46a1-9e67-382f6cbbdb74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004681976s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-488354 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-488354 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-488354 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jj6k4" [b1544c81-3881-4048-984f-1ff8e2c530df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jj6k4" [b1544c81-3881-4048-984f-1ff8e2c530df] Running
E0730 03:23:03.816666 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00718005s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-488354 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (69.16s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m9.155957188s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.16s)

TestNetworkPlugins/group/bridge/Start (89.64s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0730 03:23:44.777162 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:23:59.380136 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:23:59.385338 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:23:59.395537 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:23:59.415766 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:23:59.456028 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:23:59.537055 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:23:59.697207 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:24:00.017989 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:24:00.658409 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:24:01.938941 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:24:04.499386 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:24:07.148867 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 03:24:09.620060 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:24:19.860492 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:24:21.809330 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-488354 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m29.636166072s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.64s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-g5brf" [8d94c6c7-a4b8-4b09-bec7-1521e079b4c8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003877756s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-488354 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-488354 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nhvx6" [137c4d51-320d-4b05-9c0d-92c2f767035c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nhvx6" [137c4d51-320d-4b05-9c0d-92c2f767035c] Running
E0730 03:24:40.340806 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003738076s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-488354 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-488354 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

TestNetworkPlugins/group/bridge/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-488354 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k5dzp" [df0ca91a-e915-40f1-b2d7-19e4536df8e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-k5dzp" [df0ca91a-e915-40f1-b2d7-19e4536df8e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003419306s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.41s)

TestStartStop/group/old-k8s-version/serial/FirstStart (174.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-842024 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0730 03:25:06.697390 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-842024 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m54.764139909s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (174.76s)

TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-488354 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.30s)

TestNetworkPlugins/group/bridge/HairPin (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-488354 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.40s)

TestStartStop/group/no-preload/serial/FirstStart (69.37s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-766543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0730 03:25:48.682318 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:48.687572 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:48.697915 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:48.718166 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:48.758413 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:48.838656 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:48.998986 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:49.319507 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:49.960513 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:51.240704 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:53.800928 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:25:58.921245 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:26:09.161441 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:26:29.641738 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:26:43.221211 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-766543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m9.372010712s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.37s)

TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-766543 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d92d53a9-7a76-4a97-99be-571aab42a845] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d92d53a9-7a76-4a97-99be-571aab42a845] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004497445s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-766543 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)
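
DeployApp finishes by reading the container's open-file limit with `ulimit -n` inside the busybox pod it just deployed. A minimal sketch of that follow-up in Go; the parse-and-print check is illustrative, not the test's actual assertion:

// ulimitcheck.go - exec `ulimit -n` in the busybox pod and parse the result.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-766543",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		fmt.Println("unexpected ulimit output:", string(out))
		return
	}
	fmt.Println("open-file limit in pod:", n)
}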

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-766543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-766543 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/Stop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-766543 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-766543 --alsologtostderr -v=3: (12.014822411s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-766543 -n no-preload-766543
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-766543 -n no-preload-766543: exit status 7 (74.144216ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-766543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
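
As the test notes, `minikube status` returns exit status 7 here and that "may be ok": the host is intentionally stopped, so a caller has to unwrap the exit code instead of treating any non-zero status as fatal. A minimal sketch of that handling in Go:

// stoppedstatus.go - tolerate the stopped-host exit code from `status`.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-766543", "-n", "no-preload-766543")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host:", string(out))
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		fmt.Println("host stopped (exit 7, may be ok):", string(out))
	default:
		fmt.Println("status failed:", err)
	}
}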

TestStartStop/group/no-preload/serial/SecondStart (266.89s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-766543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0730 03:27:10.601959 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:27:22.854774 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:27:42.013443 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:42.018843 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:42.029079 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:42.049397 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:42.089783 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:42.170192 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:42.330598 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:42.650812 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:43.291477 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:44.571792 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:47.132529 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:50.537606 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:27:52.252944 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:27:54.760042 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:27:54.765276 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:27:54.775488 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:27:54.795743 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:27:54.836088 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:27:54.916468 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:27:55.076705 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:27:55.397239 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:27:56.038111 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:27:57.318915 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-766543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (4m26.515404104s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-766543 -n no-preload-766543
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.89s)
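
For reference, the start/verify sequence this test drives can be reproduced by hand; a minimal sketch, assuming a locally built out/minikube-linux-arm64 binary and a running Docker daemon (profile name and flags taken from the log above):

# Start a fresh cluster with no preload tarball, so every image is
# pulled at start time for the requested Kubernetes version.
out/minikube-linux-arm64 start -p no-preload-766543 --memory=2200 \
  --alsologtostderr --wait=true --preload=false --driver=docker \
  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0

# The test then asserts on the Host field of the profile's status.
out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-766543 -n no-preload-766543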

TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-842024 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d140f443-98a6-4afc-8c1c-676bb3be23cc] Pending
E0730 03:27:59.880047 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d140f443-98a6-4afc-8c1c-676bb3be23cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0730 03:28:02.493767 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d140f443-98a6-4afc-8c1c-676bb3be23cc] Running
E0730 03:28:05.000287 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004484556s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-842024 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)
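
The contents of testdata/busybox.yaml are not reproduced in this report; a plausible minimal equivalent (hypothetical, for illustration only) that matches the integration-test=busybox selector and supports the ulimit probe would be:

# Hypothetical stand-in for testdata/busybox.yaml: one pod, labeled to
# match the test's selector, kept alive so it can be exec'd into.
kubectl --context old-k8s-version-842024 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc   # image observed in this run's image list
    command: ["sleep", "3600"]
EOF

# The test's final check: the container's open-file-descriptor limit.
kubectl --context old-k8s-version-842024 exec busybox -- /bin/sh -c "ulimit -n"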

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-842024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-842024 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)
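
The two override flags rewrite the addon's manifest before it is applied; a sketch of checking what the deployment actually ended up running, assuming minikube prefixes the image with the overridden registry (which the fake.domain value is designed to exercise):

# Enable the addon with an overridden image and registry, as above.
out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-842024 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain

# Print the image reference the deployment was rendered with; the test
# inspects the same object via `kubectl describe`.
kubectl --context old-k8s-version-842024 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'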

TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-842024 --alsologtostderr -v=3
E0730 03:28:15.240635 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-842024 --alsologtostderr -v=3: (12.083975561s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842024 -n old-k8s-version-842024
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842024 -n old-k8s-version-842024: exit status 7 (66.537203ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-842024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
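
The "(may be ok)" note reflects that minikube's status command exits non-zero for a stopped host; a sketch of the tolerant check this test performs, treating exit status 7 (the code logged above for a cleanly stopped profile) as acceptable:

# Query host state; capture the exit code instead of failing fast.
out/minikube-linux-arm64 status --format='{{.Host}}' \
  -p old-k8s-version-842024 -n old-k8s-version-842024
rc=$?

# 0 = running, 7 = stopped per the log above; anything else is a real error.
if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
  echo "unexpected status exit code: $rc" >&2
  exit 1
fi

# Addon configuration is accepted even while the cluster is down.
out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-842024 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4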

TestStartStop/group/old-k8s-version/serial/SecondStart (130.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-842024 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0730 03:28:22.974692 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:28:32.522503 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:28:35.721725 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:28:50.195504 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 03:28:59.380182 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:29:03.935385 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:29:07.149164 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 03:29:16.682931 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:29:21.809095 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
E0730 03:29:24.775725 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:24.781061 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:24.791327 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:24.811641 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:24.851901 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:24.932235 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:25.092655 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:25.413229 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:26.053506 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:27.061865 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:29:27.334371 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:29.894916 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:35.018084 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:29:45.258293 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:30:01.249957 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:01.255329 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:01.265615 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:01.285982 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:01.326280 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:01.406651 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:01.567165 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:01.887541 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:02.527816 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:03.808092 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:05.738613 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:30:06.369154 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:11.489377 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:21.730232 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:30:25.855624 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-842024 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m10.370704372s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842024 -n old-k8s-version-842024
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (130.73s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-42btz" [85a90fac-5eae-469b-b3c5-baeefb4d6ac3] Running
E0730 03:30:38.603394 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00386893s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
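
The readiness poll above can be expressed with kubectl's built-in wait; a sketch using the same selector, namespace, and timeout as the test:

# Block until a dashboard pod is Ready, or give up after 9 minutes.
kubectl --context old-k8s-version-842024 -n kubernetes-dashboard \
  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m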

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-42btz" [85a90fac-5eae-469b-b3c5-baeefb4d6ac3] Running
E0730 03:30:42.210681 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003652956s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-842024 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-842024 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
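
The "non-minikube image" lines are images the test does not expect in a stock cluster; a sketch of surfacing them from the JSON listing, assuming jq is available and that entries carry a repoTags array (field name assumed here, not confirmed by this report):

# Dump every image tag known to the cluster's container runtime.
out/minikube-linux-arm64 -p old-k8s-version-842024 image list --format=json \
  | jq -r '.[].repoTags[]?'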

TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-842024 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-842024 -n old-k8s-version-842024
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-842024 -n old-k8s-version-842024: exit status 2 (324.692965ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-842024 -n old-k8s-version-842024
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-842024 -n old-k8s-version-842024: exit status 2 (373.082455ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-842024 --alsologtostderr -v=1
E0730 03:30:46.699139 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-842024 -n old-k8s-version-842024
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-842024 -n old-k8s-version-842024
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.12s)
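
Pausing freezes the control-plane containers and stops the kubelet, which matches the Paused/Stopped readings and exit status 2 seen above; the full cycle this test drives, as a sketch:

# Pause, then probe the two components the test asserts on. While
# paused, status exits 2, so the probes are allowed to fail here.
out/minikube-linux-arm64 pause -p old-k8s-version-842024 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-842024 || true
out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p old-k8s-version-842024 || true

# Unpause and probe again; both commands should now exit 0.
out/minikube-linux-arm64 unpause -p old-k8s-version-842024 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-842024
out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p old-k8s-version-842024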

TestStartStop/group/embed-certs/serial/FirstStart (61.63s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-885909 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0730 03:31:16.362735 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:31:23.171691 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-885909 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (1m1.625901923s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.63s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-hxb24" [f9da9f88-56bb-46db-a18f-4eb0c1619a70] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006774138s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-hxb24" [f9da9f88-56bb-46db-a18f-4eb0c1619a70] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004359914s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-766543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-766543 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.16s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-766543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-766543 -n no-preload-766543
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-766543 -n no-preload-766543: exit status 2 (329.404703ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-766543 -n no-preload-766543
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-766543 -n no-preload-766543: exit status 2 (329.722776ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-766543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-766543 -n no-preload-766543
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-766543 -n no-preload-766543
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-211601 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-211601 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (1m3.059865932s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.06s)
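
A sketch of confirming the non-default API server port took effect, assuming the kubeconfig cluster entry is named after the profile (minikube's convention):

# Start with the API server bound to 8444 instead of the default 8443.
out/minikube-linux-arm64 start -p default-k8s-diff-port-211601 --memory=2200 \
  --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker \
  --container-runtime=crio --kubernetes-version=v1.30.3

# The kubeconfig server URL for this cluster should end in :8444.
kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-211601")].cluster.server}'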

TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-885909 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f2979674-31ed-41dc-8315-6ec361a0d2a6] Pending
helpers_test.go:344: "busybox" [f2979674-31ed-41dc-8315-6ec361a0d2a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f2979674-31ed-41dc-8315-6ec361a0d2a6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.0045189s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-885909 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-885909 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-885909 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.252702151s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-885909 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/embed-certs/serial/Stop (12.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-885909 --alsologtostderr -v=3
E0730 03:32:08.619367 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-885909 --alsologtostderr -v=3: (12.166779034s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.17s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-885909 -n embed-certs-885909
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-885909 -n embed-certs-885909: exit status 7 (70.911876ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-885909 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (279.68s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-885909 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0730 03:32:22.854504 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:32:42.015274 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:32:45.092119 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-885909 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m39.309023167s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-885909 -n embed-certs-885909
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (279.68s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-211601 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af84ff1a-6e81-4b50-af3e-1b008c8628be] Pending
E0730 03:32:54.759698 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
helpers_test.go:344: "busybox" [af84ff1a-6e81-4b50-af3e-1b008c8628be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af84ff1a-6e81-4b50-af3e-1b008c8628be] Running
E0730 03:32:59.797242 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:32:59.802534 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:32:59.812782 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:32:59.833013 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:32:59.873331 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:32:59.953686 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:33:00.114682 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:33:00.435148 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:33:01.076037 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:33:02.356635 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.008670606s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-211601 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-211601 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-211601 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028406125s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-211601 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-211601 --alsologtostderr -v=3
E0730 03:33:04.916832 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:33:09.696651 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
E0730 03:33:10.037098 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-211601 --alsologtostderr -v=3: (11.951607116s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-211601 -n default-k8s-diff-port-211601
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-211601 -n default-k8s-diff-port-211601: exit status 7 (72.043974ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-211601 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-211601 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0730 03:33:20.277893 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:33:22.443551 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:33:40.758123 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:33:59.379729 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/kindnet-488354/client.crt: no such file or directory
E0730 03:34:04.855702 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
E0730 03:34:07.149308 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/addons-261813/client.crt: no such file or directory
E0730 03:34:21.719284 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:34:21.808483 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/functional-359379/client.crt: no such file or directory
E0730 03:34:24.775365 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:34:52.460308 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/flannel-488354/client.crt: no such file or directory
E0730 03:35:01.249923 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:35:28.932370 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/bridge-488354/client.crt: no such file or directory
E0730 03:35:43.640076 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
E0730 03:35:48.682196 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/calico-488354/client.crt: no such file or directory
E0730 03:36:44.851780 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:36:44.857073 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:36:44.867372 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:36:44.887614 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:36:44.927854 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:36:45.011572 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:36:45.172101 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:36:45.492694 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:36:46.133258 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:36:47.413727 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:36:49.974097 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-211601 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m49.236382861s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-211601 -n default-k8s-diff-port-211601
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.75s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-bj4zs" [63475b80-266c-41b9-8ed2-51506b5b3fb7] Running
E0730 03:36:55.094365 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.022367812s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-bj4zs" [63475b80-266c-41b9-8ed2-51506b5b3fb7] Running
E0730 03:37:05.335033 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003512174s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-885909 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-885909 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-885909 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-885909 -n embed-certs-885909
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-885909 -n embed-certs-885909: exit status 2 (321.672696ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-885909 -n embed-certs-885909
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-885909 -n embed-certs-885909: exit status 2 (315.521522ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-885909 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-885909 -n embed-certs-885909
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-885909 -n embed-certs-885909
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)

TestStartStop/group/newest-cni/serial/FirstStart (37.75s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-204831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0730 03:37:22.854486 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/auto-488354/client.crt: no such file or directory
E0730 03:37:25.815399 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
E0730 03:37:42.005156 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/custom-flannel-488354/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-204831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (37.745123427s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.75s)
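
A sketch of verifying that the kubeadm pod-network-cidr override was applied, assuming each node's spec.podCIDR is allocated out of the configured range:

# Start with CNI networking and a custom pod network CIDR.
out/minikube-linux-arm64 start -p newest-cni-204831 --memory=2200 \
  --alsologtostderr --wait=apiserver,system_pods,default_sa \
  --feature-gates ServerSideApply=true --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker \
  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0

# The node's pod CIDR should fall inside 10.42.0.0/16.
kubectl --context newest-cni-204831 get node -o jsonpath='{.items[0].spec.podCIDR}'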

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-204831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-204831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.295338631s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/newest-cni/serial/Stop (1.30s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-204831 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-204831 --alsologtostderr -v=3: (1.296063143s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-204831 -n newest-cni-204831
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-204831 -n newest-cni-204831: exit status 7 (64.367055ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-204831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
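Aside: "exit status 7 (may be ok)" is expected here. Per minikube's status help text, the exit code encodes component state in its bits (1 = minikube host, 2 = cluster, 4 = Kubernetes not running), so 7 means all three are down, which is exactly right after a deliberate stop. A quick check, as a sketch:

    out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-204831; echo "exit: $?"
    # prints "Stopped" and "exit: 7" while the profile is stopped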

TestStartStop/group/newest-cni/serial/SecondStart (16.53s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-204831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0730 03:37:54.758919 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/enable-default-cni-488354/client.crt: no such file or directory
E0730 03:37:59.797133 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/old-k8s-version-842024/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-204831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (16.144567024s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-204831 -n newest-cni-204831
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.53s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-m78p2" [a3a97fac-d319-427c-b7e0-e859607b6d3e] Running
E0730 03:38:06.776043 1597958 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/no-preload-766543/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004518301s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-204831 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
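Aside: the image audit above diffs `image list --format=json` against the expected default images, flagging anything extra as "non-minikube". The same list can be eyeballed by hand with a sketch like the following (it assumes the JSON output is an array of objects carrying a repoTags field, which holds for current minikube but is not guaranteed by this test):

    out/minikube-linux-arm64 -p newest-cni-204831 image list --format=json | jq -r '.[].repoTags[]?'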

TestStartStop/group/newest-cni/serial/Pause (3.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-204831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-204831 -n newest-cni-204831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-204831 -n newest-cni-204831: exit status 2 (300.981006ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-204831 -n newest-cni-204831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-204831 -n newest-cni-204831: exit status 2 (328.316838ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-204831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-204831 -n newest-cni-204831
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-204831 -n newest-cni-204831
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.02s)
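Aside: the exit-status-2 results above are the point of the test, not failures: `pause` freezes the control-plane containers, so `status` reports APIServer=Paused and Kubelet=Stopped and signals the degraded state through its exit code. The cycle the harness runs reduces to this sketch (trailing comments are expectations, not captured output):

    out/minikube-linux-arm64 pause -p newest-cni-204831
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-204831   # Paused, exit 2
    out/minikube-linux-arm64 unpause -p newest-cni-204831
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-204831   # Running, exit 0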

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-m78p2" [a3a97fac-d319-427c-b7e0-e859607b6d3e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004442392s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-211601 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-211601 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-211601 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-211601 -n default-k8s-diff-port-211601
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-211601 -n default-k8s-diff-port-211601: exit status 2 (321.526269ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-211601 -n default-k8s-diff-port-211601
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-211601 -n default-k8s-diff-port-211601: exit status 2 (304.57733ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-211601 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-211601 -n default-k8s-diff-port-211601
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-211601 -n default-k8s-diff-port-211601
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.85s)

Test skip (33/336)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
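Aside: the "preload" referenced here is the per-version tarball of images and binaries that minikube downloads once into its cache and reuses, which makes a separate image-caching step redundant. For illustration only (the exact filename varies by Kubernetes version, runtime, and arch, and is an assumption here):

    ls ~/.minikube/cache/preloaded-tarball/
    # e.g. preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4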

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-154092 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-154092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-154092
--- SKIP: TestDownloadOnlyKic (0.53s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (5.5s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-488354 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-488354

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-488354

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-488354

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-488354

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-488354

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-488354

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-488354

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-488354

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-488354

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-488354

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: /etc/hosts:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: /etc/resolv.conf:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-488354

>>> host: crictl pods:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: crictl containers:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> k8s: describe netcat deployment:
error: context "kubenet-488354" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-488354" does not exist

>>> k8s: netcat logs:
error: context "kubenet-488354" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-488354" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-488354" does not exist

>>> k8s: coredns logs:
error: context "kubenet-488354" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-488354" does not exist

>>> k8s: api server logs:
error: context "kubenet-488354" does not exist

>>> host: /etc/cni:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: ip a s:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: ip r s:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: iptables-save:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: iptables table nat:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-488354" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-488354" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-488354" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: kubelet daemon config:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> k8s: kubelet logs:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 30 Jul 2024 03:09:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-436007
contexts:
- context:
    cluster: pause-436007
    extensions:
    - extension:
        last-update: Tue, 30 Jul 2024 03:09:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-436007
  name: pause-436007
current-context: pause-436007
kind: Config
preferences: {}
users:
- name: pause-436007
  user:
    client-certificate: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/pause-436007/client.crt
    client-key: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/pause-436007/client.key
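Aside: the kubeconfig dumped above only knows the pause-436007 profile, which is why every kubenet-488354 probe in this debugLogs block fails with "context was not found" or "does not exist". A quick confirmation, as a sketch outside the harness:

    kubectl config get-contexts -o name
    # prints pause-436007; kubenet-488354 is absent because that profile was never started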

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-488354

>>> host: docker daemon status:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: docker daemon config:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: docker system info:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: cri-docker daemon status:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: cri-docker daemon config:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: cri-dockerd version:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: containerd daemon status:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: containerd daemon config:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: containerd config dump:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: crio daemon status:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: crio daemon config:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: /etc/crio:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

>>> host: crio config:
* Profile "kubenet-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-488354"

----------------------- debugLogs end: kubenet-488354 [took: 5.349047313s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-488354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-488354
--- SKIP: TestNetworkPlugins/group/kubenet (5.50s)

TestNetworkPlugins/group/cilium (3.8s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-488354 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-488354

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-488354

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-488354

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-488354

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-488354

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-488354

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-488354

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-488354

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-488354

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-488354

>>> host: /etc/nsswitch.conf:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: /etc/hosts:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: /etc/resolv.conf:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-488354

>>> host: crictl pods:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: crictl containers:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> k8s: describe netcat deployment:
error: context "cilium-488354" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-488354" does not exist

>>> k8s: netcat logs:
error: context "cilium-488354" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-488354" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-488354" does not exist

>>> k8s: coredns logs:
error: context "cilium-488354" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-488354" does not exist

>>> k8s: api server logs:
error: context "cilium-488354" does not exist

>>> host: /etc/cni:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: ip a s:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: ip r s:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: iptables-save:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: iptables table nat:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-488354

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-488354

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-488354" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-488354" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-488354

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-488354

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-488354" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-488354" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-488354" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-488354" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-488354" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: kubelet daemon config:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19348-1592571/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 30 Jul 2024 03:09:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-436007
contexts:
- context:
    cluster: pause-436007
    extensions:
    - extension:
        last-update: Tue, 30 Jul 2024 03:09:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-436007
  name: pause-436007
current-context: pause-436007
kind: Config
preferences: {}
users:
- name: pause-436007
  user:
    client-certificate: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/pause-436007/client.crt
    client-key: /home/jenkins/minikube-integration/19348-1592571/.minikube/profiles/pause-436007/client.key

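Note: the kubeconfig above only contains the pause-436007 context, which is why every cilium-488354 query in this debugLogs dump fails; the profile had already been deleted when the dump ran. A minimal sketch (not part of the minikube test suite) of how a debug helper could check for the context with client-go before shelling out to kubectl; the context name "cilium-488354" is taken from the log above:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig via the default loading rules ($KUBECONFIG or ~/.kube/config).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to load kubeconfig:", err)
		os.Exit(1)
	}

	// The profile name comes from the debugLogs run above.
	const wanted = "cilium-488354"
	if _, ok := cfg.Contexts[wanted]; !ok {
		// With the kubeconfig shown above this branch is taken: the profile
		// was already deleted, so only "pause-436007" remains.
		fmt.Printf("context %q does not exist; current-context is %q\n", wanted, cfg.CurrentContext)
		return
	}
	fmt.Printf("context %q found\n", wanted)
}

Against the kubeconfig dumped above, this prints that "cilium-488354" is missing and that the current context is "pause-436007", matching the errors throughout this dump.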

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-488354

>>> host: docker daemon status:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: docker daemon config:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: docker system info:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: cri-docker daemon status:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: cri-docker daemon config:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: cri-dockerd version:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: containerd daemon status:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: containerd daemon config:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: containerd config dump:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: crio daemon status:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: crio daemon config:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: /etc/crio:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

>>> host: crio config:
* Profile "cilium-488354" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-488354"

----------------------- debugLogs end: cilium-488354 [took: 3.646307994s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-488354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-488354
--- SKIP: TestNetworkPlugins/group/cilium (3.80s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-072029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-072029
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
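For context, a minimal, hypothetical sketch of the kind of driver gate that produces a SKIP like the one above; the helper virtualboxDriver and its wiring are illustrative assumptions, not minikube's actual test helpers:

package integration

import "testing"

// virtualboxDriver is a hypothetical stand-in for however the suite detects
// the VM driver under test (e.g. from a -driver flag or an env var).
func virtualboxDriver() bool {
	return false // this CI run uses the docker driver, not virtualbox
}

func TestDisableDriverMounts(t *testing.T) {
	// Driver gate: --disable-driver-mounts only applies to VM drivers that
	// mount host folders, so on every other driver the test is skipped up front.
	if !virtualboxDriver() {
		t.Skip("only runs on virtualbox")
	}
	// ... a real test would start a cluster with --disable-driver-mounts here ...
}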