Test Report: Docker_Linux_crio_arm64 19640

e5b440675da001c9bcd97e7df406aef1ef05cbc8:2024-09-14:36202

Failed tests (4/328)

|-------|----------------------------------------------|--------------|
| Order | Failed test                                  | Duration (s) |
|-------|----------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                 | 74.42        |
| 34    | TestAddons/parallel/Ingress                  | 153.11       |
| 36    | TestAddons/parallel/MetricsServer            | 348.43       |
| 174   | TestMultiControlPlane/serial/RestartCluster  | 128.43       |
|-------|----------------------------------------------|--------------|
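
The Registry failure below is an in-cluster reachability timeout: a throwaway busybox pod tries to reach the registry Service by its cluster DNS name and never gets an answer. The probe the test performs (addons_test.go:347) can be repeated by hand against a live profile; this assumes the addons-885748 cluster from this run, or any profile with the registry addon enabled, is still up:

	# Same probe the test runs; a healthy addon answers with "HTTP/1.1 200" headers.
	kubectl --context addons-885748 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

Here the command instead exited non-zero after roughly a minute with "error: timed out waiting for the condition".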

TestAddons/parallel/Registry (74.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.726552ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-bkhkl" [4d931f29-d87c-4bc8-8e58-88b441e56b0a] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004680214s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fb2vb" [7d63ca1e-f5bf-47eb-84af-ebd01e9cd4b6] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003429973s
addons_test.go:342: (dbg) Run:  kubectl --context addons-885748 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-885748 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-885748 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.103288671s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr **
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-885748 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 ip
2024/09/14 00:48:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 addons disable registry --alsologtostderr -v=1
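
Before disabling the addon, the test also resolves the node IP and probes the registry's exposed port 5000 directly (the DEBUG GET against 192.168.49.2:5000 above). A rough host-side equivalent, assuming the profile is still running and curl is available on the host, would be:

	# Hypothetical manual check, not part of the test itself.
	IP=$(out/minikube-linux-arm64 -p addons-885748 ip)
	curl -sI "http://${IP}:5000" | head -n 1    # a working registry is expected to return an HTTP/1.1 200 status line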
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-885748
helpers_test.go:235: (dbg) docker inspect addons-885748:

-- stdout --
	[
	    {
	        "Id": "16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a",
	        "Created": "2024-09-14T00:35:51.693021132Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 875338,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-14T00:35:51.852610858Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fe3365929e6ce54b4c06f0bc3d1500dff08f535844ef4978f2c45cd67c542134",
	        "ResolvConfPath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/hostname",
	        "HostsPath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/hosts",
	        "LogPath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a-json.log",
	        "Name": "/addons-885748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-885748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-885748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee-init/diff:/var/lib/docker/overlay2/75b2121147f32424fffc5e50d2609c96cf2fdc411273d8660afbb09b8a3ad07a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-885748",
	                "Source": "/var/lib/docker/volumes/addons-885748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-885748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-885748",
	                "name.minikube.sigs.k8s.io": "addons-885748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1a2274d2fe074b454d8fc13c1575d8f017a8d3113ed94af95faf9d1d2583971",
	            "SandboxKey": "/var/run/docker/netns/b1a2274d2fe0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33564"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33565"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33568"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33566"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33567"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-885748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c1a0d21fd124d60633c329f0674dc6666a0292fe6f6b1be172c6bb2b7fa6a718",
	                    "EndpointID": "ce9472c43f8e5b4bcc4e1fe669f69274e4050166515b932738a2ad8472c5184d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-885748",
	                        "16a9106e2bf9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
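
The docker inspect dump above is the stock post-mortem capture of the minikube node container. When only the container state and published ports are of interest, docker's standard --format template flag can narrow the same query; this is an optional manual step, not something the test harness runs:

	# Container state plus host port mappings for the node container.
	docker inspect addons-885748 --format '{{.State.Status}} {{json .NetworkSettings.Ports}}'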
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-885748 -n addons-885748
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-885748 logs -n 25: (1.908322133s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-116392   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | -p download-only-116392              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| delete  | -p download-only-116392              | download-only-116392   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| start   | -o=json --download-only              | download-only-396021   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | -p download-only-396021              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| delete  | -p download-only-396021              | download-only-396021   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| delete  | -p download-only-116392              | download-only-116392   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| delete  | -p download-only-396021              | download-only-396021   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| start   | --download-only -p                   | download-docker-830102 | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | download-docker-830102               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-830102            | download-docker-830102 | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| start   | --download-only -p                   | binary-mirror-918324   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | binary-mirror-918324                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44679               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-918324              | binary-mirror-918324   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| addons  | disable dashboard -p                 | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | addons-885748                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | addons-885748                        |                        |         |         |                     |                     |
	| start   | -p addons-885748 --wait=true         | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:39 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-885748 addons                 | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:47 UTC | 14 Sep 24 00:47 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-885748 addons                 | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:47 UTC | 14 Sep 24 00:47 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-885748 addons disable         | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| ip      | addons-885748 ip                     | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	| addons  | addons-885748 addons disable         | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | -p addons-885748                     |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:35:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:35:27.648597  874848 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:35:27.648788  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:27.648825  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:35:27.648839  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:27.649116  874848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 00:35:27.649628  874848 out.go:352] Setting JSON to false
	I0914 00:35:27.650620  874848 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15472,"bootTime":1726258656,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0914 00:35:27.650703  874848 start.go:139] virtualization:  
	I0914 00:35:27.652331  874848 out.go:177] * [addons-885748] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 00:35:27.654538  874848 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:35:27.654641  874848 notify.go:220] Checking for updates...
	I0914 00:35:27.657216  874848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:35:27.658569  874848 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 00:35:27.659690  874848 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	I0914 00:35:27.661085  874848 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 00:35:27.662124  874848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:35:27.663629  874848 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:35:27.685055  874848 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:35:27.685194  874848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:27.743708  874848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:35:27.734595728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:27.743818  874848 docker.go:318] overlay module found
	I0914 00:35:27.746292  874848 out.go:177] * Using the docker driver based on user configuration
	I0914 00:35:27.747376  874848 start.go:297] selected driver: docker
	I0914 00:35:27.747391  874848 start.go:901] validating driver "docker" against <nil>
	I0914 00:35:27.747405  874848 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:35:27.748035  874848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:27.802291  874848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:35:27.792988752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:27.802504  874848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:35:27.802746  874848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:35:27.803994  874848 out.go:177] * Using Docker driver with root privileges
	I0914 00:35:27.804986  874848 cni.go:84] Creating CNI manager for ""
	I0914 00:35:27.805046  874848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:35:27.805057  874848 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 00:35:27.805150  874848 start.go:340] cluster config:
	{Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:35:27.807346  874848 out.go:177] * Starting "addons-885748" primary control-plane node in "addons-885748" cluster
	I0914 00:35:27.808476  874848 cache.go:121] Beginning downloading kic base image for docker with crio
	I0914 00:35:27.809606  874848 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0914 00:35:27.810871  874848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:27.810920  874848 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0914 00:35:27.810932  874848 cache.go:56] Caching tarball of preloaded images
	I0914 00:35:27.810960  874848 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 00:35:27.811021  874848 preload.go:172] Found /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 00:35:27.811031  874848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:35:27.811392  874848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/config.json ...
	I0914 00:35:27.811450  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/config.json: {Name:mk574a8eb9ef8f9e3b261644b0ca0e71c6fc48e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:27.826453  874848 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 00:35:27.826558  874848 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 00:35:27.826581  874848 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 00:35:27.826586  874848 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 00:35:27.826598  874848 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 00:35:27.826604  874848 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0914 00:35:44.803607  874848 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0914 00:35:44.803643  874848 cache.go:194] Successfully downloaded all kic artifacts
	I0914 00:35:44.803673  874848 start.go:360] acquireMachinesLock for addons-885748: {Name:mk9ddda16eaf26a40c295d659f1e42acd6143125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:35:44.803799  874848 start.go:364] duration metric: took 104.539µs to acquireMachinesLock for "addons-885748"
	I0914 00:35:44.803830  874848 start.go:93] Provisioning new machine with config: &{Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:35:44.803926  874848 start.go:125] createHost starting for "" (driver="docker")
	I0914 00:35:44.805508  874848 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0914 00:35:44.805767  874848 start.go:159] libmachine.API.Create for "addons-885748" (driver="docker")
	I0914 00:35:44.805803  874848 client.go:168] LocalClient.Create starting
	I0914 00:35:44.805931  874848 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem
	I0914 00:35:45.234194  874848 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem
	I0914 00:35:45.623675  874848 cli_runner.go:164] Run: docker network inspect addons-885748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 00:35:45.638888  874848 cli_runner.go:211] docker network inspect addons-885748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 00:35:45.638974  874848 network_create.go:284] running [docker network inspect addons-885748] to gather additional debugging logs...
	I0914 00:35:45.638996  874848 cli_runner.go:164] Run: docker network inspect addons-885748
	W0914 00:35:45.653957  874848 cli_runner.go:211] docker network inspect addons-885748 returned with exit code 1
	I0914 00:35:45.653988  874848 network_create.go:287] error running [docker network inspect addons-885748]: docker network inspect addons-885748: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-885748 not found
	I0914 00:35:45.654007  874848 network_create.go:289] output of [docker network inspect addons-885748]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-885748 not found
	
	** /stderr **
	I0914 00:35:45.654106  874848 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 00:35:45.672611  874848 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400048fee0}
	I0914 00:35:45.672659  874848 network_create.go:124] attempt to create docker network addons-885748 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 00:35:45.672715  874848 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-885748 addons-885748
	I0914 00:35:45.738461  874848 network_create.go:108] docker network addons-885748 192.168.49.0/24 created
	I0914 00:35:45.738494  874848 kic.go:121] calculated static IP "192.168.49.2" for the "addons-885748" container
	I0914 00:35:45.738570  874848 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 00:35:45.753096  874848 cli_runner.go:164] Run: docker volume create addons-885748 --label name.minikube.sigs.k8s.io=addons-885748 --label created_by.minikube.sigs.k8s.io=true
	I0914 00:35:45.768446  874848 oci.go:103] Successfully created a docker volume addons-885748
	I0914 00:35:45.768544  874848 cli_runner.go:164] Run: docker run --rm --name addons-885748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885748 --entrypoint /usr/bin/test -v addons-885748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib
	I0914 00:35:47.532910  874848 cli_runner.go:217] Completed: docker run --rm --name addons-885748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885748 --entrypoint /usr/bin/test -v addons-885748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib: (1.764328482s)
	I0914 00:35:47.532939  874848 oci.go:107] Successfully prepared a docker volume addons-885748
	I0914 00:35:47.532965  874848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:47.532986  874848 kic.go:194] Starting extracting preloaded images to volume ...
	I0914 00:35:47.533050  874848 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-885748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 00:35:51.627808  874848 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-885748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir: (4.094718795s)
	I0914 00:35:51.627842  874848 kic.go:203] duration metric: took 4.094852633s to extract preloaded images to volume ...
	W0914 00:35:51.627991  874848 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 00:35:51.628114  874848 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 00:35:51.679472  874848 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-885748 --name addons-885748 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885748 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-885748 --network addons-885748 --ip 192.168.49.2 --volume addons-885748:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243
	I0914 00:35:52.026413  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Running}}
	I0914 00:35:52.054130  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:35:52.081150  874848 cli_runner.go:164] Run: docker exec addons-885748 stat /var/lib/dpkg/alternatives/iptables
	I0914 00:35:52.151645  874848 oci.go:144] the created container "addons-885748" has a running status.
	I0914 00:35:52.151674  874848 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa...
	I0914 00:35:52.411723  874848 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 00:35:52.437353  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:35:52.459127  874848 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 00:35:52.459149  874848 kic_runner.go:114] Args: [docker exec --privileged addons-885748 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 00:35:52.535444  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:35:52.561504  874848 machine.go:93] provisionDockerMachine start ...
	I0914 00:35:52.561596  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:52.592426  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:52.592702  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:52.592718  874848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 00:35:52.593577  874848 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48640->127.0.0.1:33564: read: connection reset by peer
	I0914 00:35:55.712678  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-885748
	
	I0914 00:35:55.712704  874848 ubuntu.go:169] provisioning hostname "addons-885748"
	I0914 00:35:55.712793  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:55.730083  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:55.730330  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:55.730355  874848 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-885748 && echo "addons-885748" | sudo tee /etc/hostname
	I0914 00:35:55.863937  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-885748
	
	I0914 00:35:55.864025  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:55.884479  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:55.884728  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:55.884753  874848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-885748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-885748/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-885748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:35:56.006206  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:35:56.006299  874848 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-868698/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-868698/.minikube}
	I0914 00:35:56.006369  874848 ubuntu.go:177] setting up certificates
	I0914 00:35:56.006397  874848 provision.go:84] configureAuth start
	I0914 00:35:56.006497  874848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885748
	I0914 00:35:56.025656  874848 provision.go:143] copyHostCerts
	I0914 00:35:56.025744  874848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem (1078 bytes)
	I0914 00:35:56.025874  874848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem (1123 bytes)
	I0914 00:35:56.025946  874848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem (1679 bytes)
	I0914 00:35:56.026001  874848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem org=jenkins.addons-885748 san=[127.0.0.1 192.168.49.2 addons-885748 localhost minikube]
	I0914 00:35:56.397039  874848 provision.go:177] copyRemoteCerts
	I0914 00:35:56.397111  874848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:35:56.397152  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.413576  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:56.502071  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:35:56.525597  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:35:56.549087  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 00:35:56.572443  874848 provision.go:87] duration metric: took 566.020273ms to configureAuth
	I0914 00:35:56.572469  874848 ubuntu.go:193] setting minikube options for container-runtime
	I0914 00:35:56.572641  874848 config.go:182] Loaded profile config "addons-885748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:35:56.572750  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.589020  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:56.589468  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:56.589494  874848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:35:56.813689  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:35:56.813714  874848 machine.go:96] duration metric: took 4.252187622s to provisionDockerMachine
	I0914 00:35:56.813724  874848 client.go:171] duration metric: took 12.007912s to LocalClient.Create
	I0914 00:35:56.813737  874848 start.go:167] duration metric: took 12.007978992s to libmachine.API.Create "addons-885748"
	I0914 00:35:56.813745  874848 start.go:293] postStartSetup for "addons-885748" (driver="docker")
	I0914 00:35:56.813756  874848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:35:56.813824  874848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:35:56.813884  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.830802  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:56.918469  874848 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:35:56.921566  874848 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 00:35:56.921600  874848 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 00:35:56.921611  874848 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 00:35:56.921619  874848 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 00:35:56.921629  874848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/addons for local assets ...
	I0914 00:35:56.921700  874848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/files for local assets ...
	I0914 00:35:56.921730  874848 start.go:296] duration metric: took 107.979103ms for postStartSetup
	I0914 00:35:56.922050  874848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885748
	I0914 00:35:56.937996  874848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/config.json ...
	I0914 00:35:56.938300  874848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:35:56.938349  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.957478  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:57.042229  874848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 00:35:57.047056  874848 start.go:128] duration metric: took 12.243112242s to createHost
	I0914 00:35:57.047078  874848 start.go:83] releasing machines lock for "addons-885748", held for 12.243266454s
	I0914 00:35:57.047155  874848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885748
	I0914 00:35:57.063313  874848 ssh_runner.go:195] Run: cat /version.json
	I0914 00:35:57.063378  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:57.063655  874848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:35:57.063724  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:57.084371  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:57.094261  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:57.308581  874848 ssh_runner.go:195] Run: systemctl --version
	I0914 00:35:57.312939  874848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:35:57.451620  874848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 00:35:57.455973  874848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:35:57.477002  874848 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 00:35:57.477132  874848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:35:57.511110  874848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0914 00:35:57.511137  874848 start.go:495] detecting cgroup driver to use...
	I0914 00:35:57.511169  874848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 00:35:57.511217  874848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:35:57.526481  874848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:35:57.538293  874848 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:35:57.538364  874848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:35:57.552686  874848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:35:57.568072  874848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:35:57.662991  874848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:35:57.755248  874848 docker.go:233] disabling docker service ...
	I0914 00:35:57.755320  874848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:35:57.774750  874848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:35:57.786925  874848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:35:57.878521  874848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:35:57.968297  874848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:35:57.980122  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:35:57.996615  874848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 00:35:57.996733  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.007909  874848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:35:58.008088  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.019602  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.030797  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.040901  874848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:35:58.051366  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.061514  874848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.077469  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.087600  874848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:35:58.096431  874848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:35:58.104922  874848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:35:58.194238  874848 ssh_runner.go:195] Run: sudo systemctl restart crio
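The sed commands above all rewrite the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, before crio is restarted. A minimal sketch for checking the resulting values by hand (commands and expected settings are inferred from this log, not produced by the test run):

	# inspect the drop-in the run just rewrote
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the sed edits above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls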
	I0914 00:35:58.315200  874848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:35:58.315290  874848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:35:58.319115  874848 start.go:563] Will wait 60s for crictl version
	I0914 00:35:58.319183  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:35:58.322590  874848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:35:58.360321  874848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
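The crictl check above uses the endpoint written to /etc/crictl.yaml a few lines earlier. An illustrative way to repeat the same check manually against that socket (not part of the test output):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl info    # full runtime status and config as JSON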
	I0914 00:35:58.360485  874848 ssh_runner.go:195] Run: crio --version
	I0914 00:35:58.401355  874848 ssh_runner.go:195] Run: crio --version
	I0914 00:35:58.441347  874848 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0914 00:35:58.443849  874848 cli_runner.go:164] Run: docker network inspect addons-885748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 00:35:58.459835  874848 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 00:35:58.463371  874848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:35:58.473895  874848 kubeadm.go:883] updating cluster {Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:35:58.474017  874848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:58.474077  874848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:35:58.547909  874848 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:35:58.547932  874848 crio.go:433] Images already preloaded, skipping extraction
	I0914 00:35:58.547987  874848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:35:58.584064  874848 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:35:58.584085  874848 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:35:58.584094  874848 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0914 00:35:58.584187  874848 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-885748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
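The kubelet unit and ExecStart rendered above are written out as systemd files a few lines below (the kubelet.service unit and the 10-kubeadm.conf drop-in). A small sketch for inspecting what actually landed on the node, using standard systemd tooling (illustrative, not part of this run):

	sudo systemctl cat kubelet                                       # unit file plus drop-ins
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  # the ExecStart flags shown above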
	I0914 00:35:58.584272  874848 ssh_runner.go:195] Run: crio config
	I0914 00:35:58.630750  874848 cni.go:84] Creating CNI manager for ""
	I0914 00:35:58.630773  874848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:35:58.630784  874848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:35:58.630808  874848 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-885748 NodeName:addons-885748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:35:58.630990  874848 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-885748"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 00:35:58.631062  874848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:35:58.639996  874848 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:35:58.640108  874848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:35:58.648765  874848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0914 00:35:58.666409  874848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:35:58.684328  874848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
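The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new (2151 bytes) and later copied to kubeadm.yaml before init runs. A sketch for reviewing the staged file on the node (paths taken from this log; not part of the test output):

	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# once the later cp to kubeadm.yaml has happened, the two files should be identical:
	sudo diff /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new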
	I0914 00:35:58.702308  874848 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 00:35:58.705701  874848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:35:58.716106  874848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:35:58.806646  874848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:35:58.820194  874848 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748 for IP: 192.168.49.2
	I0914 00:35:58.820228  874848 certs.go:194] generating shared ca certs ...
	I0914 00:35:58.820260  874848 certs.go:226] acquiring lock for ca certs: {Name:mk51aad7f25871620dee3805dbb159a74d927d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:58.821048  874848 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key
	I0914 00:35:59.115008  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt ...
	I0914 00:35:59.115046  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt: {Name:mk7e420a6f4116f40ba205310e9949cc0a07cff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.115273  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key ...
	I0914 00:35:59.115289  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key: {Name:mk6495fd05c501516a1dbc6a3c5a3d111749eaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.115383  874848 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key
	I0914 00:35:59.669563  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt ...
	I0914 00:35:59.669645  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt: {Name:mk74326826b78a79963a2466e661d640c5de6beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.670798  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key ...
	I0914 00:35:59.670831  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key: {Name:mkaa14c9fcec32cffb1eac0dcfd1682b507c2fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.671658  874848 certs.go:256] generating profile certs ...
	I0914 00:35:59.671756  874848 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.key
	I0914 00:35:59.671786  874848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt with IP's: []
	I0914 00:36:00.652822  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt ...
	I0914 00:36:00.652865  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: {Name:mk1fbf9bed840a2d57fd0d4fd8e94a75ab019179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.653669  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.key ...
	I0914 00:36:00.653689  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.key: {Name:mkbe4a15da3a2ff3d45a92e0a1634742aa384a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.654315  874848 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37
	I0914 00:36:00.654340  874848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0914 00:36:00.819327  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37 ...
	I0914 00:36:00.819359  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37: {Name:mk886299dc91db0af4189545598b67789e917e31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.820194  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37 ...
	I0914 00:36:00.820213  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37: {Name:mk8b5789c23e69638787fc7a9959d1efbdaf2020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.820297  874848 certs.go:381] copying /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37 -> /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt
	I0914 00:36:00.820377  874848 certs.go:385] copying /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37 -> /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key
	I0914 00:36:00.820432  874848 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key
	I0914 00:36:00.820453  874848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt with IP's: []
	I0914 00:36:01.002520  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt ...
	I0914 00:36:01.002560  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt: {Name:mkb7b3d55ccc68a6a5b5150959ff889ebad35b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:01.002757  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key ...
	I0914 00:36:01.002770  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key: {Name:mk1ef6af0211d101b3583380a03915d2b95c5f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:01.003925  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 00:36:01.003979  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:36:01.004010  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:36:01.004036  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem (1679 bytes)
	I0914 00:36:01.004717  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:36:01.031940  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 00:36:01.056903  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:36:01.083162  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:36:01.111392  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 00:36:01.147185  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:36:01.177160  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:36:01.205911  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:36:01.233312  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:36:01.259299  874848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:36:01.278922  874848 ssh_runner.go:195] Run: openssl version
	I0914 00:36:01.284640  874848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:36:01.296674  874848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:36:01.300328  874848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 00:35 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:36:01.300461  874848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:36:01.307711  874848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
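The certificates copied to /var/lib/minikube/certs above can be spot-checked with standard openssl commands; a hedged example using file names from this log:

	sudo openssl x509 -noout -subject -dates -in /var/lib/minikube/certs/ca.crt
	sudo openssl x509 -noout -subject -dates -in /var/lib/minikube/certs/apiserver.crt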
	I0914 00:36:01.317582  874848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:36:01.320964  874848 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 00:36:01.321016  874848 kubeadm.go:392] StartCluster: {Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:36:01.321148  874848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:36:01.321218  874848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:36:01.359180  874848 cri.go:89] found id: ""
	I0914 00:36:01.359294  874848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 00:36:01.368589  874848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 00:36:01.378278  874848 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0914 00:36:01.378369  874848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:36:01.388604  874848 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:36:01.388628  874848 kubeadm.go:157] found existing configuration files:
	
	I0914 00:36:01.388687  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:36:01.397916  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:36:01.398044  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:36:01.407970  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:36:01.418575  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:36:01.418702  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:36:01.428387  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:36:01.437829  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:36:01.437915  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:36:01.446922  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:36:01.456143  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:36:01.456266  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:36:01.465388  874848 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 00:36:01.505666  874848 kubeadm.go:310] W0914 00:36:01.504983    1183 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:36:01.506942  874848 kubeadm.go:310] W0914 00:36:01.506340    1183 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:36:01.533882  874848 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0914 00:36:01.596564  874848 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:36:18.463208  874848 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 00:36:18.463272  874848 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 00:36:18.463364  874848 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0914 00:36:18.463422  874848 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0914 00:36:18.463465  874848 kubeadm.go:310] OS: Linux
	I0914 00:36:18.463513  874848 kubeadm.go:310] CGROUPS_CPU: enabled
	I0914 00:36:18.463569  874848 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0914 00:36:18.463623  874848 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0914 00:36:18.463685  874848 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0914 00:36:18.463738  874848 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0914 00:36:18.463797  874848 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0914 00:36:18.463846  874848 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0914 00:36:18.463898  874848 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0914 00:36:18.463954  874848 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0914 00:36:18.464031  874848 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 00:36:18.464129  874848 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 00:36:18.464222  874848 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 00:36:18.464287  874848 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 00:36:18.468842  874848 out.go:235]   - Generating certificates and keys ...
	I0914 00:36:18.468938  874848 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 00:36:18.469010  874848 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 00:36:18.469086  874848 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 00:36:18.469154  874848 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 00:36:18.469218  874848 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 00:36:18.469284  874848 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 00:36:18.469347  874848 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 00:36:18.469472  874848 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-885748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 00:36:18.469529  874848 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 00:36:18.469646  874848 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-885748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 00:36:18.469714  874848 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 00:36:18.469780  874848 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 00:36:18.469827  874848 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 00:36:18.469885  874848 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 00:36:18.469939  874848 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 00:36:18.469998  874848 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 00:36:18.470057  874848 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 00:36:18.470123  874848 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 00:36:18.470180  874848 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 00:36:18.470264  874848 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 00:36:18.470332  874848 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 00:36:18.472924  874848 out.go:235]   - Booting up control plane ...
	I0914 00:36:18.473034  874848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 00:36:18.473114  874848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 00:36:18.473210  874848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 00:36:18.473402  874848 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 00:36:18.473492  874848 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 00:36:18.473540  874848 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 00:36:18.473674  874848 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 00:36:18.473785  874848 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 00:36:18.473846  874848 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000717079s
	I0914 00:36:18.473919  874848 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 00:36:18.473979  874848 kubeadm.go:310] [api-check] The API server is healthy after 6.001506819s
	I0914 00:36:18.474086  874848 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 00:36:18.474212  874848 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 00:36:18.474272  874848 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 00:36:18.474458  874848 kubeadm.go:310] [mark-control-plane] Marking the node addons-885748 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 00:36:18.474517  874848 kubeadm.go:310] [bootstrap-token] Using token: d5jq5w.vhxle95wpku6sua3
	I0914 00:36:18.477217  874848 out.go:235]   - Configuring RBAC rules ...
	I0914 00:36:18.477426  874848 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 00:36:18.477516  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 00:36:18.477659  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 00:36:18.477798  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 00:36:18.477917  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 00:36:18.478005  874848 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 00:36:18.478122  874848 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 00:36:18.478169  874848 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 00:36:18.478217  874848 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 00:36:18.478225  874848 kubeadm.go:310] 
	I0914 00:36:18.478284  874848 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 00:36:18.478295  874848 kubeadm.go:310] 
	I0914 00:36:18.478372  874848 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 00:36:18.478380  874848 kubeadm.go:310] 
	I0914 00:36:18.478405  874848 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 00:36:18.478483  874848 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 00:36:18.478539  874848 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 00:36:18.478547  874848 kubeadm.go:310] 
	I0914 00:36:18.478601  874848 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 00:36:18.478608  874848 kubeadm.go:310] 
	I0914 00:36:18.478659  874848 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 00:36:18.478666  874848 kubeadm.go:310] 
	I0914 00:36:18.478718  874848 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 00:36:18.478796  874848 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 00:36:18.478865  874848 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 00:36:18.478872  874848 kubeadm.go:310] 
	I0914 00:36:18.478956  874848 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 00:36:18.479036  874848 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 00:36:18.479043  874848 kubeadm.go:310] 
	I0914 00:36:18.479127  874848 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d5jq5w.vhxle95wpku6sua3 \
	I0914 00:36:18.479234  874848 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57751d36d4a8735ba13dc9bb14d661ba8c23675462a620d84c252b50ebcb21ac \
	I0914 00:36:18.479257  874848 kubeadm.go:310] 	--control-plane 
	I0914 00:36:18.479264  874848 kubeadm.go:310] 
	I0914 00:36:18.479348  874848 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 00:36:18.479356  874848 kubeadm.go:310] 
	I0914 00:36:18.479437  874848 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d5jq5w.vhxle95wpku6sua3 \
	I0914 00:36:18.479556  874848 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57751d36d4a8735ba13dc9bb14d661ba8c23675462a620d84c252b50ebcb21ac 
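The join command printed above embeds a time-limited bootstrap token and the CA cert hash. A fresh one can be reprinted with kubeadm; this is a generic kubeadm facility shown for context, not something this run invokes (binary path taken from the log):

	# run on the control-plane node, e.g. via "minikube -p addons-885748 ssh"
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm token create --print-join-command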
	I0914 00:36:18.479573  874848 cni.go:84] Creating CNI manager for ""
	I0914 00:36:18.479580  874848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:36:18.482414  874848 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 00:36:18.485202  874848 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 00:36:18.488984  874848 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0914 00:36:18.489018  874848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0914 00:36:18.507633  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
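The manifest applied here is the kindnet CNI recommended above for the docker driver + crio runtime combination. An illustrative way to confirm it came up, using the same kube context that appears elsewhere in this report (not part of the test output):

	kubectl --context addons-885748 -n kube-system get daemonsets
	kubectl --context addons-885748 -n kube-system get pods -o wide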
	I0914 00:36:18.797119  874848 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:36:18.797283  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:18.797372  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-885748 minikube.k8s.io/updated_at=2024_09_14T00_36_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=addons-885748 minikube.k8s.io/primary=true
	I0914 00:36:18.977875  874848 ops.go:34] apiserver oom_adj: -16
	I0914 00:36:18.977984  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:19.478709  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:19.978932  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:20.478468  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:20.978465  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:21.478838  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:21.978427  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:22.478979  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:22.570954  874848 kubeadm.go:1113] duration metric: took 3.773734503s to wait for elevateKubeSystemPrivileges
	I0914 00:36:22.570993  874848 kubeadm.go:394] duration metric: took 21.249981733s to StartCluster
	I0914 00:36:22.571028  874848 settings.go:142] acquiring lock: {Name:mk58b1b9b697202ac4a931cd839962dd8a5a8fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:22.571754  874848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 00:36:22.572140  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/kubeconfig: {Name:mk4bce51b3b1a0b5e086688a43a01615410b8350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:22.572375  874848 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:36:22.572521  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 00:36:22.572784  874848 config.go:182] Loaded profile config "addons-885748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:36:22.572823  874848 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
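The toEnable map above is what this start decided to enable for the profile; individual addons can also be listed or toggled afterwards with the addons subcommand, which the test itself uses later (e.g. to disable registry). An illustrative example with the same binary and profile name seen in this report:

	out/minikube-linux-arm64 -p addons-885748 addons list
	out/minikube-linux-arm64 -p addons-885748 addons enable metrics-server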
	I0914 00:36:22.572908  874848 addons.go:69] Setting yakd=true in profile "addons-885748"
	I0914 00:36:22.572928  874848 addons.go:234] Setting addon yakd=true in "addons-885748"
	I0914 00:36:22.572954  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.573611  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.573704  874848 addons.go:69] Setting inspektor-gadget=true in profile "addons-885748"
	I0914 00:36:22.573723  874848 addons.go:234] Setting addon inspektor-gadget=true in "addons-885748"
	I0914 00:36:22.573749  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.574184  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.574511  874848 addons.go:69] Setting cloud-spanner=true in profile "addons-885748"
	I0914 00:36:22.574548  874848 addons.go:234] Setting addon cloud-spanner=true in "addons-885748"
	I0914 00:36:22.574580  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.574991  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.576584  874848 addons.go:69] Setting metrics-server=true in profile "addons-885748"
	I0914 00:36:22.576658  874848 addons.go:234] Setting addon metrics-server=true in "addons-885748"
	I0914 00:36:22.576806  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.577674  874848 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-885748"
	I0914 00:36:22.577697  874848 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-885748"
	I0914 00:36:22.577728  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.578157  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.578583  874848 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-885748"
	I0914 00:36:22.578673  874848 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-885748"
	I0914 00:36:22.578735  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.579310  874848 addons.go:69] Setting default-storageclass=true in profile "addons-885748"
	I0914 00:36:22.579360  874848 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-885748"
	I0914 00:36:22.579654  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.584636  874848 addons.go:69] Setting registry=true in profile "addons-885748"
	I0914 00:36:22.584679  874848 addons.go:234] Setting addon registry=true in "addons-885748"
	I0914 00:36:22.584722  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.585212  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.593639  874848 addons.go:69] Setting gcp-auth=true in profile "addons-885748"
	I0914 00:36:22.593697  874848 mustload.go:65] Loading cluster: addons-885748
	I0914 00:36:22.593833  874848 addons.go:69] Setting ingress=true in profile "addons-885748"
	I0914 00:36:22.593875  874848 addons.go:234] Setting addon ingress=true in "addons-885748"
	I0914 00:36:22.593947  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.594556  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.598655  874848 addons.go:69] Setting storage-provisioner=true in profile "addons-885748"
	I0914 00:36:22.598696  874848 addons.go:234] Setting addon storage-provisioner=true in "addons-885748"
	I0914 00:36:22.598738  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.599322  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.608030  874848 addons.go:69] Setting ingress-dns=true in profile "addons-885748"
	I0914 00:36:22.608066  874848 addons.go:234] Setting addon ingress-dns=true in "addons-885748"
	I0914 00:36:22.608124  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.608719  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.626438  874848 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-885748"
	I0914 00:36:22.626484  874848 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-885748"
	I0914 00:36:22.627015  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.646668  874848 out.go:177] * Verifying Kubernetes components...
	I0914 00:36:22.646964  874848 addons.go:69] Setting volcano=true in profile "addons-885748"
	I0914 00:36:22.646995  874848 addons.go:234] Setting addon volcano=true in "addons-885748"
	I0914 00:36:22.647044  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.647621  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.705055  874848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:36:22.647935  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.727451  874848 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 00:36:22.730961  874848 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 00:36:22.731026  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 00:36:22.731127  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.648200  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.654108  874848 config.go:182] Loaded profile config "addons-885748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:36:22.761663  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.769083  874848 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 00:36:22.764527  874848 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-885748"
	I0914 00:36:22.769445  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.769907  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.656713  874848 addons.go:69] Setting volumesnapshots=true in profile "addons-885748"
	I0914 00:36:22.779473  874848 addons.go:234] Setting addon volumesnapshots=true in "addons-885748"
	I0914 00:36:22.779518  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.779996  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.792421  874848 addons.go:234] Setting addon default-storageclass=true in "addons-885748"
	I0914 00:36:22.792474  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.793016  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
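	The run of "Setting addon ...=true" entries above reflects the addon set selected for this profile. A hypothetical manual equivalent, assuming the same profile name and standard minikube addon names (not the harness's own code path):
	
	    minikube -p addons-885748 addons enable metrics-server
	    minikube -p addons-885748 addons enable registry
	    minikube -p addons-885748 addons list   # confirm which addons ended up enabled
	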
	I0914 00:36:22.798974  874848 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 00:36:22.798996  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 00:36:22.799056  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.807827  874848 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 00:36:22.808051  874848 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 00:36:22.813221  874848 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 00:36:22.818788  874848 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:36:22.821693  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:36:22.821715  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 00:36:22.836132  874848 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 00:36:22.836202  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.857667  874848 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:36:22.857695  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 00:36:22.857766  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.833063  874848 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 00:36:22.861715  874848 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 00:36:22.861794  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.901334  874848 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 00:36:22.901725  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:36:22.925512  874848 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 00:36:22.925579  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 00:36:22.925682  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.938211  874848 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	W0914 00:36:22.945665  874848 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0914 00:36:22.970824  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0914 00:36:22.981537  874848 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 00:36:22.981638  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0914 00:36:22.981747  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.948860  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 00:36:23.001198  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.002556  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:23.010390  874848 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 00:36:23.010488  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 00:36:23.010589  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.015908  874848 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 00:36:23.020213  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 00:36:23.020313  874848 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 00:36:23.020408  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.010624  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 00:36:23.028044  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 00:36:23.029030  874848 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 00:36:23.029111  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 00:36:23.030953  874848 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 00:36:23.031021  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.030844  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.040780  874848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:36:23.041521  874848 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 00:36:23.041540  874848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 00:36:23.041615  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.043116  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 00:36:23.045748  874848 out.go:177]   - Using image docker.io/busybox:stable
	I0914 00:36:23.057729  874848 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 00:36:23.057758  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 00:36:23.057830  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.064630  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 00:36:23.067439  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 00:36:23.070090  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 00:36:23.072657  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 00:36:23.077385  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 00:36:23.080086  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 00:36:23.082722  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 00:36:23.082752  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 00:36:23.082824  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.102635  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.164403  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.164493  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.181637  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.210074  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.221553  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.231587  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.243309  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.245585  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.246302  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.254741  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
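	Each sshutil entry above reuses the SSH port that cli_runner resolved from the container's 22/tcp binding. A hypothetical manual equivalent of that lookup and connection, using the key path and user shown in the log:
	
	    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-885748)
	    ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa \
	        -p "$PORT" docker@127.0.0.1
	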
	I0914 00:36:23.413347  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 00:36:23.499318  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 00:36:23.563619  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 00:36:23.563646  874848 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 00:36:23.621774  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:36:23.632161  874848 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 00:36:23.632237  874848 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 00:36:23.658865  874848 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 00:36:23.658965  874848 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 00:36:23.678416  874848 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 00:36:23.678502  874848 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 00:36:23.687807  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 00:36:23.690302  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 00:36:23.690386  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 00:36:23.692343  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 00:36:23.741847  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 00:36:23.741869  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 00:36:23.753765  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 00:36:23.783717  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 00:36:23.783739  874848 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 00:36:23.798983  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 00:36:23.801823  874848 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 00:36:23.801892  874848 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 00:36:23.865228  874848 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 00:36:23.865305  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 00:36:23.896358  874848 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 00:36:23.896422  874848 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 00:36:23.908740  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 00:36:23.908812  874848 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 00:36:23.912605  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 00:36:23.912686  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 00:36:23.950755  874848 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 00:36:23.950831  874848 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 00:36:23.989143  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 00:36:23.989215  874848 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 00:36:24.045999  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 00:36:24.067005  874848 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 00:36:24.067084  874848 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 00:36:24.094537  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 00:36:24.094616  874848 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 00:36:24.121448  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 00:36:24.121549  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 00:36:24.152405  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 00:36:24.152477  874848 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 00:36:24.187995  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 00:36:24.188063  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 00:36:24.248301  874848 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 00:36:24.248379  874848 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 00:36:24.263656  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 00:36:24.263745  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 00:36:24.270468  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 00:36:24.279636  874848 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:36:24.279710  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 00:36:24.361088  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:36:24.363766  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 00:36:24.391930  874848 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 00:36:24.392005  874848 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 00:36:24.404762  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 00:36:24.404847  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 00:36:24.532083  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 00:36:24.532154  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 00:36:24.541629  874848 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 00:36:24.541706  874848 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 00:36:24.586534  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 00:36:24.586608  874848 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 00:36:24.613471  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 00:36:24.613547  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 00:36:24.635565  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 00:36:24.635636  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 00:36:24.636353  874848 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 00:36:24.636397  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 00:36:24.697087  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 00:36:24.718922  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 00:36:24.719002  874848 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 00:36:24.797070  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 00:36:26.012706  874848 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.019755946s)
	I0914 00:36:26.012793  874848 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
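	The completed pipeline above injected a hosts block resolving host.minikube.internal to 192.168.49.1 into the coredns ConfigMap. A hypothetical spot check, assuming kubectl access to the same cluster and the usual kubeadm-style Corefile data key:
	
	    kubectl --context addons-885748 -n kube-system get configmap coredns \
	      -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	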
	I0914 00:36:26.013955  874848 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.973152264s)
	I0914 00:36:26.015246  874848 node_ready.go:35] waiting up to 6m0s for node "addons-885748" to be "Ready" ...
	I0914 00:36:26.818351  874848 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-885748" context rescaled to 1 replicas
	I0914 00:36:27.031638  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.618251032s)
	I0914 00:36:27.031747  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.532359077s)
	I0914 00:36:27.943561  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.321702676s)
	I0914 00:36:28.026391  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:29.167249  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.4793608s)
	I0914 00:36:29.167283  874848 addons.go:475] Verifying addon ingress=true in "addons-885748"
	I0914 00:36:29.167350  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.47493769s)
	I0914 00:36:29.167560  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.413773964s)
	I0914 00:36:29.167638  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.36858762s)
	I0914 00:36:29.167755  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.121680395s)
	I0914 00:36:29.167774  874848 addons.go:475] Verifying addon registry=true in "addons-885748"
	I0914 00:36:29.168314  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.897753986s)
	I0914 00:36:29.168337  874848 addons.go:475] Verifying addon metrics-server=true in "addons-885748"
	I0914 00:36:29.170573  874848 out.go:177] * Verifying ingress addon...
	I0914 00:36:29.170589  874848 out.go:177] * Verifying registry addon...
	I0914 00:36:29.173514  874848 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 00:36:29.174597  874848 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 00:36:29.186358  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.825179108s)
	W0914 00:36:29.186394  874848 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 00:36:29.186416  874848 retry.go:31] will retry after 308.598821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 00:36:29.186470  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.822639917s)
	I0914 00:36:29.186790  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.489573307s)
	I0914 00:36:29.190776  874848 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-885748 service yakd-dashboard -n yakd-dashboard
	
	I0914 00:36:29.212739  874848 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 00:36:29.212822  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0914 00:36:29.216216  874848 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0914 00:36:29.217682  874848 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 00:36:29.217744  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:29.495756  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:36:29.512701  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.715530225s)
	I0914 00:36:29.512744  874848 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-885748"
	I0914 00:36:29.515691  874848 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 00:36:29.519486  874848 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 00:36:29.539384  874848 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 00:36:29.539410  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
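	The kapi.go lines that follow are minikube polling the csi-hostpath-driver pods until they leave Pending. A hypothetical one-shot equivalent from outside the harness, assuming the same label selector and namespace shown in the log:
	
	    kubectl --context addons-885748 -n kube-system wait --for=condition=Ready \
	      pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=300s
	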
	I0914 00:36:29.680191  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:29.681532  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:30.038990  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:30.044266  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:30.207217  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:30.207796  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:30.532034  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:30.684159  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:30.690166  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:30.825068  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.329257891s)
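	The earlier retry (and the apply --force rerun that just completed) is the usual symptom of applying a custom resource in the same pass as the CRD that defines it: the VolumeSnapshotClass could not be mapped until the snapshot CRDs were established. A hypothetical two-phase alternative using the same addon manifests named in the log, rather than minikube's built-in retry:
	
	    kubectl apply \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply \
	      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	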
	I0914 00:36:31.024298  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:31.191902  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:31.193352  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:31.523717  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:31.679556  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:31.679786  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:32.025958  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:32.177305  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:32.178350  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:32.520588  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:32.524000  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:32.680213  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:32.680810  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:33.030499  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:33.183678  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:33.184449  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:33.377273  874848 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 00:36:33.377351  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:33.401444  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:33.523097  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:33.527304  874848 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 00:36:33.559831  874848 addons.go:234] Setting addon gcp-auth=true in "addons-885748"
	I0914 00:36:33.559879  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:33.560345  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:33.581054  874848 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 00:36:33.581121  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:33.600889  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:33.680155  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:33.681074  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:33.693949  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:36:33.696552  874848 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 00:36:33.699092  874848 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 00:36:33.699119  874848 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 00:36:33.725491  874848 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 00:36:33.725513  874848 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 00:36:33.757659  874848 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 00:36:33.757696  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 00:36:33.780450  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 00:36:34.023974  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:34.184181  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:34.186273  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:34.384150  874848 addons.go:475] Verifying addon gcp-auth=true in "addons-885748"
	I0914 00:36:34.387248  874848 out.go:177] * Verifying gcp-auth addon...
	I0914 00:36:34.390886  874848 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 00:36:34.407033  874848 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 00:36:34.407059  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:34.523397  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:34.677151  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:34.678963  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:34.894681  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:35.022306  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:35.024438  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:35.180032  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:35.183480  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:35.394703  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:35.523865  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:35.678515  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:35.678806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:35.894818  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:36.023008  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:36.177908  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:36.178956  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:36.394280  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:36.523510  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:36.678580  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:36.679226  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:36.894421  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:37.022844  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:37.024157  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:37.179135  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:37.179370  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:37.394553  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:37.524074  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:37.678080  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:37.679783  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:37.894034  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:38.024946  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:38.177683  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:38.179187  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:38.394540  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:38.522543  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:38.677451  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:38.679196  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:38.894643  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:39.024177  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:39.176814  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:39.178126  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:39.394403  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:39.518971  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:39.523042  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:39.677285  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:39.678415  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:39.894993  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:40.023302  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:40.177076  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:40.179028  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:40.394726  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:40.523300  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:40.678856  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:40.679285  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:40.894199  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:41.023013  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:41.177101  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:41.179290  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:41.394521  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:41.523390  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:41.677524  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:41.678904  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:41.894216  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:42.019193  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:42.023697  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:42.179722  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:42.181177  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:42.394876  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:42.523050  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:42.678341  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:42.679685  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:42.894916  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:43.023233  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:43.178682  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:43.179104  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:43.393946  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:43.523203  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:43.678371  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:43.679407  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:43.894754  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:44.026113  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:44.026149  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:44.177508  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:44.178927  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:44.394341  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:44.523594  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:44.676754  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:44.678698  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:44.893862  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:45.036741  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:45.178582  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:45.179375  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:45.393987  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:45.523508  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:45.677652  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:45.679385  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:45.894863  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:46.022919  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:46.177463  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:46.179089  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:46.394354  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:46.519328  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:46.523445  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:46.681121  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:46.681456  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:46.894381  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:47.028265  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:47.176495  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:47.178087  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:47.394423  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:47.522613  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:47.678292  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:47.679289  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:47.894860  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:48.024213  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:48.179671  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:48.179824  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:48.394247  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:48.519429  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:48.523282  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:48.678115  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:48.678922  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:48.894375  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:49.023908  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:49.177643  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:49.178596  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:49.394010  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:49.522523  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:49.676673  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:49.678634  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:49.893972  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:50.022979  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:50.177122  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:50.178955  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:50.394118  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:50.522454  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:50.677309  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:50.679281  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:50.894463  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:51.018819  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:51.023156  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:51.176878  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:51.178776  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:51.394280  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:51.523133  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:51.677466  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:51.679143  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:51.894631  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:52.023396  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:52.178391  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:52.179266  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:52.397286  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:52.522690  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:52.678357  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:52.679217  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:52.894486  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:53.019119  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:53.023307  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:53.179178  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:53.180677  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:53.394368  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:53.523183  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:53.676964  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:53.678364  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:53.894958  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:54.023821  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:54.177805  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:54.179585  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:54.394992  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:54.522425  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:54.676902  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:54.679155  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:54.894649  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:55.019727  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:55.023310  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:55.178226  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:55.178305  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:55.394779  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:55.522925  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:55.678915  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:55.679410  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:55.894279  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:56.023547  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:56.177234  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:56.178937  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:56.394264  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:56.523247  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:56.677609  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:56.678834  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:56.894510  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:57.023730  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:57.176949  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:57.180346  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:57.394998  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:57.519266  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:57.522884  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:57.677959  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:57.678792  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:57.894825  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:58.023518  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:58.178392  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:58.179460  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:58.394911  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:58.522691  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:58.677369  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:58.678790  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:58.895085  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:59.022531  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:59.177915  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:59.178932  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:59.394803  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:59.522983  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:59.677060  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:59.678616  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:59.894389  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:00.020453  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:00.066148  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:00.187116  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:00.189373  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:00.395382  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:00.523090  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:00.677052  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:00.679313  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:00.894619  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:01.022440  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:01.177549  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:01.179048  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:01.394471  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:01.522894  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:01.678225  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:01.679589  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:01.894931  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:02.023563  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:02.176988  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:02.179706  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:02.394268  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:02.519079  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:02.522948  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:02.676989  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:02.679144  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:02.896226  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:03.022882  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:03.177959  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:03.179565  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:03.394297  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:03.523549  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:03.677616  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:03.679072  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:03.894507  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:04.023469  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:04.177349  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:04.179232  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:04.393784  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:04.522550  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:04.676854  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:04.678639  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:04.895316  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:05.018889  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:05.023416  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:05.177247  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:05.179114  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:05.394990  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:05.522362  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:05.678794  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:05.678970  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:05.893966  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:06.023096  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:06.177160  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:06.177654  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:06.394767  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:06.522308  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:06.678541  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:06.679024  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:06.894066  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:07.019192  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:07.023773  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:07.177818  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:07.178447  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:07.396991  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:07.536462  874848 node_ready.go:49] node "addons-885748" has status "Ready":"True"
	I0914 00:37:07.536540  874848 node_ready.go:38] duration metric: took 41.52122498s for node "addons-885748" to be "Ready" ...
	I0914 00:37:07.536564  874848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:37:07.545962  874848 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 00:37:07.545989  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:07.560429  874848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8m89r" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:07.735954  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:07.737075  874848 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 00:37:07.737140  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:07.904390  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:08.025045  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:08.203301  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:08.204003  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:08.398762  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:08.524366  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:08.680865  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:08.681279  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:08.900177  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:09.025214  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:09.185641  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:09.187308  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:09.397596  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:09.524714  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:09.567577  874848 pod_ready.go:93] pod "coredns-7c65d6cfc9-8m89r" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.567609  874848 pod_ready.go:82] duration metric: took 2.007088321s for pod "coredns-7c65d6cfc9-8m89r" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.567631  874848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.579438  874848 pod_ready.go:93] pod "etcd-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.579467  874848 pod_ready.go:82] duration metric: took 11.821727ms for pod "etcd-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.579484  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.585364  874848 pod_ready.go:93] pod "kube-apiserver-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.585385  874848 pod_ready.go:82] duration metric: took 5.89278ms for pod "kube-apiserver-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.585397  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.592284  874848 pod_ready.go:93] pod "kube-controller-manager-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.592307  874848 pod_ready.go:82] duration metric: took 6.902865ms for pod "kube-controller-manager-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.592321  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dqs2h" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.602562  874848 pod_ready.go:93] pod "kube-proxy-dqs2h" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.602588  874848 pod_ready.go:82] duration metric: took 10.259695ms for pod "kube-proxy-dqs2h" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.602600  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.681569  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:09.682934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:09.897633  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:09.965408  874848 pod_ready.go:93] pod "kube-scheduler-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.965433  874848 pod_ready.go:82] duration metric: took 362.810925ms for pod "kube-scheduler-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.965445  874848 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:10.026493  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:10.179971  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:10.182101  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:10.395509  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:10.526262  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:10.677859  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:10.679418  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:10.895078  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:11.025168  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:11.178262  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:11.178738  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:11.395621  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:11.524715  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:11.679184  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:11.679849  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:11.895381  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:11.971662  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:12.025550  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:12.181156  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:12.182835  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:12.394926  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:12.531168  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:12.679085  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:12.680606  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:12.895370  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:13.024873  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:13.177451  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:13.180418  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:13.395380  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:13.525613  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:13.680824  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:13.681895  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:13.895500  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:14.025845  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:14.179688  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:14.180935  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:14.394764  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:14.471939  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:14.524071  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:14.677438  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:14.679890  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:14.894272  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:15.025996  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:15.178028  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:15.180621  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:15.395485  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:15.523999  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:15.678417  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:15.678947  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:15.894558  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:16.025025  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:16.178905  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:16.180441  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:16.395296  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:16.473887  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:16.525561  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:16.678894  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:16.680480  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:16.896742  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:17.026245  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:17.182060  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:17.184538  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:17.395416  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:17.526035  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:17.681664  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:17.683402  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:17.895817  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:18.025795  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:18.181775  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:18.181945  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:18.395803  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:18.524488  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:18.677526  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:18.681893  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:18.894318  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:18.972325  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:19.024913  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:19.178927  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:19.181186  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:19.394794  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:19.524419  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:19.679432  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:19.680935  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:19.894744  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:20.024634  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:20.178560  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:20.179605  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:20.394255  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:20.524521  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:20.676974  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:20.684973  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:20.894834  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:21.025765  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:21.180351  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:21.181362  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:21.399049  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:21.475732  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:21.528175  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:21.680488  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:21.681930  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:21.894250  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:22.024817  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:22.177928  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:22.179422  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:22.394499  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:22.525146  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:22.678627  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:22.680308  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:22.895246  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:23.025863  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:23.177031  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:23.180339  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:23.396492  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:23.524470  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:23.678638  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:23.679382  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:23.895207  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:23.977712  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:24.029304  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:24.180362  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:24.183282  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:24.396641  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:24.529357  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:24.682468  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:24.684392  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:24.895280  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:25.025730  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:25.181572  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:25.183727  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:25.395405  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:25.524566  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:25.682333  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:25.683779  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:25.901528  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:25.980022  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:26.035812  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:26.184465  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:26.185902  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:26.399183  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:26.525590  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:26.684422  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:26.685595  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:26.895348  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:27.024667  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:27.178206  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:27.179539  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:27.395704  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:27.525158  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:27.679873  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:27.680479  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:27.897323  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:28.024852  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:28.179057  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:28.179565  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:28.394541  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:28.471727  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:28.524321  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:28.679544  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:28.680139  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:28.894850  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:29.024419  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:29.179889  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:29.180105  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:29.394579  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:29.525631  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:29.677384  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:29.679484  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:29.895140  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:30.039214  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:30.181482  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:30.191420  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:30.394455  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:30.472915  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:30.527806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:30.682674  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:30.686718  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:30.894383  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:31.025227  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:31.179957  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:31.181007  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:31.394945  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:31.524555  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:31.677768  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:31.680475  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:31.894695  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:32.025054  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:32.177978  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:32.179683  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:32.395007  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:32.524604  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:32.678592  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:32.679923  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:32.894812  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:32.973284  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:33.026118  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:33.180514  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:33.181997  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:33.394813  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:33.525731  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:33.680977  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:33.682906  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:33.895068  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:34.025233  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:34.179148  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:34.182917  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:34.395651  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:34.526147  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:34.684709  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:34.686035  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:34.895105  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:35.026583  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:35.183369  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:35.185238  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:35.394700  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:35.472721  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:35.525329  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:35.680355  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:35.681672  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:35.894138  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:36.025921  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:36.180639  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:36.181945  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:36.395181  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:36.525218  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:36.683856  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:36.688534  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:36.895037  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:37.026844  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:37.180732  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:37.181806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:37.395037  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:37.473242  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:37.525608  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:37.685407  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:37.688853  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:37.896431  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:38.026407  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:38.178835  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:38.180018  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:38.395082  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:38.524569  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:38.679237  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:38.680243  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:38.895023  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:39.024964  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:39.178384  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:39.180349  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:39.394794  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:39.524736  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:39.679043  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:39.680200  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:39.894788  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:39.972306  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:40.026022  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:40.178841  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:40.180531  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:40.396615  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:40.526316  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:40.679017  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:40.681396  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:40.895111  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:41.025310  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:41.180533  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:41.182040  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:41.394933  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:41.525086  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:41.678595  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:41.681805  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:41.894264  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:41.972495  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:42.027746  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:42.181385  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:42.183231  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:42.395231  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:42.525359  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:42.686622  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:42.687690  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:42.894396  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:43.026528  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:43.178411  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:43.180614  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:43.394781  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:43.526157  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:43.678825  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:43.680171  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:43.894755  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:43.974886  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:44.025244  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:44.180632  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:44.180874  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:44.394573  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:44.525492  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:44.679244  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:44.680033  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:44.894811  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:45.026849  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:45.180224  874848 kapi.go:107] duration metric: took 1m16.006705195s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 00:37:45.181335  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:45.394742  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:45.524737  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:45.678853  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:45.894538  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:46.025270  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:46.180187  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:46.420542  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:46.473873  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:46.527047  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:46.690644  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:46.895272  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:47.025191  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:47.180081  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:47.394774  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:47.524580  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:47.679051  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:47.895292  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:48.028400  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:48.181125  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:48.395824  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:48.526249  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:48.680363  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:48.894934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:48.973378  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:49.024934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:49.180049  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:49.394655  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:49.525575  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:49.678950  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:49.895006  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:50.027602  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:50.180508  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:50.395361  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:50.525087  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:50.679749  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:50.894761  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:50.974000  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:51.026760  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:51.180269  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:51.395059  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:51.525416  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:51.678919  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:51.895522  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:52.025040  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:52.179046  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:52.394934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:52.524525  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:52.679988  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:52.894201  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:53.024676  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:53.179453  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:53.394512  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:53.471888  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:53.523864  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:53.679106  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:53.896220  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:54.024917  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:54.180335  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:54.396636  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:54.526135  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:54.679867  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:54.912541  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:55.034071  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:55.179674  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:55.395396  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:55.473836  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:55.528494  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:55.680286  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:55.895006  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:56.027576  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:56.179160  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:56.398064  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:56.525392  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:56.680283  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:56.895453  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:57.024877  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:57.179495  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:57.395302  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:57.526310  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:57.678953  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:57.894929  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:57.978490  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:58.024590  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:58.182030  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:58.396784  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:58.526220  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:58.679161  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:58.894516  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:59.045878  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:59.183420  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:59.395337  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:59.525591  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:59.679994  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:59.896190  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:00.044921  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:00.276537  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:00.395763  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:00.472688  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:00.524316  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:00.679693  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:00.894767  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:01.024436  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:01.182184  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:01.396167  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:01.525666  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:01.679817  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:01.895495  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:02.026019  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:02.180745  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:02.395882  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:02.525241  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:02.679057  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:02.894767  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:02.975993  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:03.025801  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:03.180760  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:03.395339  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:03.526291  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:03.679567  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:03.899232  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:04.024210  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:04.179325  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:04.395706  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:04.524231  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:04.679479  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:04.894905  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:05.027840  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:05.182382  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:05.396023  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:05.473085  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:05.525806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:05.679191  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:05.896374  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:06.029480  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:06.178890  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:06.395107  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:06.525046  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:06.679017  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:06.894377  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:07.024327  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:07.178898  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:07.398532  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:07.473351  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:07.525318  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:07.681140  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:07.894913  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:08.027202  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:08.182979  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:08.395165  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:08.524187  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:08.694704  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:08.895184  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:09.030109  874848 kapi.go:107] duration metric: took 1m39.510623393s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 00:38:09.179285  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:09.395413  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:09.679463  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:09.895033  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:09.973120  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:10.179271  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:10.394612  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:10.679453  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:10.895174  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:11.178632  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:11.395338  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:11.679157  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:11.894373  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:12.179833  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:12.394349  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:12.471290  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:12.679175  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:12.895106  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:13.178590  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:13.396117  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:13.680434  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:13.894967  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:14.180563  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:14.396225  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:14.471905  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:14.679676  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:14.895516  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:15.179205  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:15.396426  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:15.679433  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:15.894213  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:16.179496  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:16.395328  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:16.476000  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:16.680237  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:16.896031  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:17.180049  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:17.395144  874848 kapi.go:107] duration metric: took 1m43.00425795s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 00:38:17.398321  874848 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-885748 cluster.
	I0914 00:38:17.400983  874848 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 00:38:17.403694  874848 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
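	[Editor's note on the gcp-auth hint above: a minimal Go sketch of a pod spec carrying the `gcp-auth-skip-secret` label the addon looks for, using the standard k8s.io/api types. The pod name, image, and the "true" label value are illustrative assumptions, not taken from this run.]

		package main

		import (
			"fmt"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		)

		// skipGCPAuthPod returns a Pod that opts out of gcp-auth credential
		// mounting by carrying the gcp-auth-skip-secret label mentioned in
		// the minikube output above. Name, image, and label value are
		// placeholders for illustration only.
		func skipGCPAuthPod() *corev1.Pod {
			return &corev1.Pod{
				ObjectMeta: metav1.ObjectMeta{
					Name:   "example-no-gcp-auth",
					Labels: map[string]string{"gcp-auth-skip-secret": "true"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "busybox",
					}},
				},
			}
		}

		func main() {
			// Print the labels to confirm the skip label is set.
			fmt.Println(skipGCPAuthPod().ObjectMeta.Labels)
		}

	[Pods created without this label would get the mounted credentials, per the messages above; existing pods need to be recreated or the addon re-enabled with --refresh.]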
	I0914 00:38:17.679164  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:18.180368  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:18.483241  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:18.680385  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:19.184791  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:19.679772  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:20.180317  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:20.680340  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:20.972195  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:21.178702  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:21.679238  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:22.180491  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:22.681102  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:23.189543  874848 kapi.go:107] duration metric: took 1m54.01494077s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 00:38:23.191218  874848 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0914 00:38:23.192745  874848 addons.go:510] duration metric: took 2m0.619913914s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0914 00:38:23.475147  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:25.971975  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:28.477226  874848 pod_ready.go:93] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"True"
	I0914 00:38:28.477369  874848 pod_ready.go:82] duration metric: took 1m18.511914681s for pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace to be "Ready" ...
	I0914 00:38:28.477405  874848 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9nphx" in "kube-system" namespace to be "Ready" ...
	I0914 00:38:28.484642  874848 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9nphx" in "kube-system" namespace has status "Ready":"True"
	I0914 00:38:28.484732  874848 pod_ready.go:82] duration metric: took 7.280703ms for pod "nvidia-device-plugin-daemonset-9nphx" in "kube-system" namespace to be "Ready" ...
	I0914 00:38:28.484774  874848 pod_ready.go:39] duration metric: took 1m20.948183548s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:38:28.484841  874848 api_server.go:52] waiting for apiserver process to appear ...
	I0914 00:38:28.484919  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:38:28.485034  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:38:28.547414  874848 cri.go:89] found id: "48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:28.547485  874848 cri.go:89] found id: ""
	I0914 00:38:28.547506  874848 logs.go:276] 1 containers: [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4]
	I0914 00:38:28.547595  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.551987  874848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:38:28.552116  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:38:28.598910  874848 cri.go:89] found id: "f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:28.598933  874848 cri.go:89] found id: ""
	I0914 00:38:28.598941  874848 logs.go:276] 1 containers: [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7]
	I0914 00:38:28.599013  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.602400  874848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:38:28.602560  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:38:28.644171  874848 cri.go:89] found id: "80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:28.644192  874848 cri.go:89] found id: ""
	I0914 00:38:28.644201  874848 logs.go:276] 1 containers: [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9]
	I0914 00:38:28.644254  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.647972  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:38:28.648065  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:38:28.684644  874848 cri.go:89] found id: "d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:28.684667  874848 cri.go:89] found id: ""
	I0914 00:38:28.684675  874848 logs.go:276] 1 containers: [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289]
	I0914 00:38:28.684761  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.689599  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:38:28.689693  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:38:28.727470  874848 cri.go:89] found id: "a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:28.727491  874848 cri.go:89] found id: ""
	I0914 00:38:28.727499  874848 logs.go:276] 1 containers: [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887]
	I0914 00:38:28.727552  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.731365  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:38:28.731447  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:38:28.771519  874848 cri.go:89] found id: "f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:28.771541  874848 cri.go:89] found id: ""
	I0914 00:38:28.771550  874848 logs.go:276] 1 containers: [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da]
	I0914 00:38:28.771625  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.775121  874848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:38:28.775189  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:38:28.814792  874848 cri.go:89] found id: "56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:28.814816  874848 cri.go:89] found id: ""
	I0914 00:38:28.814824  874848 logs.go:276] 1 containers: [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d]
	I0914 00:38:28.814877  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.818284  874848 logs.go:123] Gathering logs for kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] ...
	I0914 00:38:28.818307  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:28.891320  874848 logs.go:123] Gathering logs for etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] ...
	I0914 00:38:28.891360  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:28.937126  874848 logs.go:123] Gathering logs for coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] ...
	I0914 00:38:28.937157  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:28.983373  874848 logs.go:123] Gathering logs for kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] ...
	I0914 00:38:28.983404  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:29.030599  874848 logs.go:123] Gathering logs for container status ...
	I0914 00:38:29.030626  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:38:29.088803  874848 logs.go:123] Gathering logs for kubelet ...
	I0914 00:38:29.088834  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 00:38:29.133183  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.432207    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.133455  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.432263    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.133676  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.439952    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.133911  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440000    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.134100  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.440181    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.134329  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440215    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.134541  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461184    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.134794  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461241    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.134992  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461318    1502 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.135240  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.135446  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.135709  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.135907  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.136142  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:29.192135  874848 logs.go:123] Gathering logs for dmesg ...
	I0914 00:38:29.192184  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:38:29.210094  874848 logs.go:123] Gathering logs for kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] ...
	I0914 00:38:29.210125  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:29.301224  874848 logs.go:123] Gathering logs for kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] ...
	I0914 00:38:29.301271  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:29.347119  874848 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:38:29.347147  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:38:29.444517  874848 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:38:29.444551  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:38:29.632311  874848 logs.go:123] Gathering logs for kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] ...
	I0914 00:38:29.632339  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:29.679537  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:29.679564  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 00:38:29.679625  874848 out.go:270] X Problems detected in kubelet:
	W0914 00:38:29.679636  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.679644  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.679651  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.679704  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.679712  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:29.679719  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:29.679725  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:38:39.681411  874848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:38:39.695346  874848 api_server.go:72] duration metric: took 2m17.122934524s to wait for apiserver process to appear ...
	I0914 00:38:39.695371  874848 api_server.go:88] waiting for apiserver healthz status ...
	I0914 00:38:39.695407  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:38:39.695463  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:38:39.743999  874848 cri.go:89] found id: "48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:39.744019  874848 cri.go:89] found id: ""
	I0914 00:38:39.744026  874848 logs.go:276] 1 containers: [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4]
	I0914 00:38:39.744108  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.748186  874848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:38:39.748271  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:38:39.786567  874848 cri.go:89] found id: "f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:39.786591  874848 cri.go:89] found id: ""
	I0914 00:38:39.786600  874848 logs.go:276] 1 containers: [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7]
	I0914 00:38:39.786673  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.790106  874848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:38:39.790172  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:38:39.830802  874848 cri.go:89] found id: "80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:39.830825  874848 cri.go:89] found id: ""
	I0914 00:38:39.830832  874848 logs.go:276] 1 containers: [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9]
	I0914 00:38:39.830891  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.834483  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:38:39.834578  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:38:39.873400  874848 cri.go:89] found id: "d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:39.873426  874848 cri.go:89] found id: ""
	I0914 00:38:39.873435  874848 logs.go:276] 1 containers: [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289]
	I0914 00:38:39.873493  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.877489  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:38:39.877568  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:38:39.915990  874848 cri.go:89] found id: "a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:39.916016  874848 cri.go:89] found id: ""
	I0914 00:38:39.916025  874848 logs.go:276] 1 containers: [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887]
	I0914 00:38:39.916112  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.919561  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:38:39.919637  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:38:39.957315  874848 cri.go:89] found id: "f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:39.957383  874848 cri.go:89] found id: ""
	I0914 00:38:39.957405  874848 logs.go:276] 1 containers: [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da]
	I0914 00:38:39.957474  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.960827  874848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:38:39.960894  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:38:40.000698  874848 cri.go:89] found id: "56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:40.000764  874848 cri.go:89] found id: ""
	I0914 00:38:40.000787  874848 logs.go:276] 1 containers: [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d]
	I0914 00:38:40.000868  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:40.009160  874848 logs.go:123] Gathering logs for kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] ...
	I0914 00:38:40.009238  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:40.063889  874848 logs.go:123] Gathering logs for kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] ...
	I0914 00:38:40.063916  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:40.140420  874848 logs.go:123] Gathering logs for container status ...
	I0914 00:38:40.140455  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:38:40.191420  874848 logs.go:123] Gathering logs for kubelet ...
	I0914 00:38:40.191454  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 00:38:40.233432  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.432207    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.233678  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.432263    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.233863  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.439952    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.234086  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440000    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.234255  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.440181    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.234464  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440215    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.234649  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461184    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.234875  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461241    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.235058  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461318    1502 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.235282  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.235469  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.235697  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.235870  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.236085  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:40.287929  874848 logs.go:123] Gathering logs for dmesg ...
	I0914 00:38:40.287960  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:38:40.304167  874848 logs.go:123] Gathering logs for coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] ...
	I0914 00:38:40.304197  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:40.351418  874848 logs.go:123] Gathering logs for kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] ...
	I0914 00:38:40.351450  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:40.405932  874848 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:38:40.405964  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:38:40.500837  874848 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:38:40.500877  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:38:40.647711  874848 logs.go:123] Gathering logs for kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] ...
	I0914 00:38:40.647741  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:40.699610  874848 logs.go:123] Gathering logs for etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] ...
	I0914 00:38:40.699643  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:40.758127  874848 logs.go:123] Gathering logs for kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] ...
	I0914 00:38:40.758155  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:40.808598  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:40.808623  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 00:38:40.808730  874848 out.go:270] X Problems detected in kubelet:
	W0914 00:38:40.808745  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.808772  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.808781  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.808787  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.808793  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:40.808806  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:40.808813  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:38:50.810748  874848 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 00:38:50.820324  874848 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 00:38:50.821343  874848 api_server.go:141] control plane version: v1.31.1
	I0914 00:38:50.821369  874848 api_server.go:131] duration metric: took 11.125990917s to wait for apiserver health ...
	I0914 00:38:50.821379  874848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:38:50.821403  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:38:50.821465  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:38:50.857789  874848 cri.go:89] found id: "48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:50.857812  874848 cri.go:89] found id: ""
	I0914 00:38:50.857820  874848 logs.go:276] 1 containers: [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4]
	I0914 00:38:50.857879  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:50.862216  874848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:38:50.862284  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:38:50.900268  874848 cri.go:89] found id: "f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:50.900291  874848 cri.go:89] found id: ""
	I0914 00:38:50.900299  874848 logs.go:276] 1 containers: [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7]
	I0914 00:38:50.900373  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:50.903842  874848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:38:50.903933  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:38:50.942518  874848 cri.go:89] found id: "80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:50.942541  874848 cri.go:89] found id: ""
	I0914 00:38:50.942549  874848 logs.go:276] 1 containers: [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9]
	I0914 00:38:50.942619  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:50.946096  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:38:50.946185  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:38:51.008164  874848 cri.go:89] found id: "d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:51.008212  874848 cri.go:89] found id: ""
	I0914 00:38:51.008227  874848 logs.go:276] 1 containers: [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289]
	I0914 00:38:51.008295  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.013303  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:38:51.013405  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:38:51.060066  874848 cri.go:89] found id: "a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:51.060149  874848 cri.go:89] found id: ""
	I0914 00:38:51.060172  874848 logs.go:276] 1 containers: [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887]
	I0914 00:38:51.060263  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.064118  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:38:51.064238  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:38:51.110490  874848 cri.go:89] found id: "f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:51.110528  874848 cri.go:89] found id: ""
	I0914 00:38:51.110537  874848 logs.go:276] 1 containers: [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da]
	I0914 00:38:51.110602  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.114745  874848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:38:51.114821  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:38:51.160743  874848 cri.go:89] found id: "56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:51.160763  874848 cri.go:89] found id: ""
	I0914 00:38:51.160771  874848 logs.go:276] 1 containers: [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d]
	I0914 00:38:51.160828  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.164783  874848 logs.go:123] Gathering logs for kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] ...
	I0914 00:38:51.164809  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:51.215849  874848 logs.go:123] Gathering logs for kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] ...
	I0914 00:38:51.215885  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:51.312761  874848 logs.go:123] Gathering logs for kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] ...
	I0914 00:38:51.312793  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:51.353667  874848 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:38:51.353697  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:38:51.448552  874848 logs.go:123] Gathering logs for container status ...
	I0914 00:38:51.448591  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:38:51.500391  874848 logs.go:123] Gathering logs for kubelet ...
	I0914 00:38:51.500420  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 00:38:51.527174  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.432207    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.527480  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.432263    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.527688  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.439952    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.527942  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440000    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.528142  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.440181    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.528385  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440215    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.528603  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461184    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.528866  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461241    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.529094  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461318    1502 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.529366  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.529580  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.529810  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.529984  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.530197  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:51.594195  874848 logs.go:123] Gathering logs for coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] ...
	I0914 00:38:51.594227  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:51.635725  874848 logs.go:123] Gathering logs for kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] ...
	I0914 00:38:51.635758  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:51.704376  874848 logs.go:123] Gathering logs for etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] ...
	I0914 00:38:51.704410  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:51.757616  874848 logs.go:123] Gathering logs for kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] ...
	I0914 00:38:51.757649  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:51.796955  874848 logs.go:123] Gathering logs for dmesg ...
	I0914 00:38:51.796986  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:38:51.815711  874848 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:38:51.815779  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:38:51.950032  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:51.950064  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 00:38:51.950122  874848 out.go:270] X Problems detected in kubelet:
	W0914 00:38:51.950135  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.950143  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.950157  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.950164  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.950177  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:51.950183  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:51.950190  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:39:01.963900  874848 system_pods.go:59] 18 kube-system pods found
	I0914 00:39:01.963965  874848 system_pods.go:61] "coredns-7c65d6cfc9-8m89r" [550228bd-69a1-4530-af98-0200cecdabf1] Running
	I0914 00:39:01.963975  874848 system_pods.go:61] "csi-hostpath-attacher-0" [cbc09b3c-e59c-4698-b6c7-f9d1746ab697] Running
	I0914 00:39:01.964017  874848 system_pods.go:61] "csi-hostpath-resizer-0" [1d0b01fe-048b-4b9e-82dd-5b408414180f] Running
	I0914 00:39:01.964026  874848 system_pods.go:61] "csi-hostpathplugin-mgx77" [456dedd2-11aa-43aa-8f21-e93340384161] Running
	I0914 00:39:01.964031  874848 system_pods.go:61] "etcd-addons-885748" [76fc0bec-b6e2-415d-8c2a-3bdb3f6bf113] Running
	I0914 00:39:01.964035  874848 system_pods.go:61] "kindnet-m55kx" [724646d8-f3df-4b7c-830a-ec84d16dc1c6] Running
	I0914 00:39:01.964040  874848 system_pods.go:61] "kube-apiserver-addons-885748" [c6447df2-c534-4e85-afc8-5da7d2435aa6] Running
	I0914 00:39:01.964045  874848 system_pods.go:61] "kube-controller-manager-addons-885748" [9727b4e8-1fa1-4175-b2ce-7bdd6ac0676c] Running
	I0914 00:39:01.964050  874848 system_pods.go:61] "kube-ingress-dns-minikube" [e6eb7e3a-203d-452a-b040-fbe431e6f08f] Running
	I0914 00:39:01.964054  874848 system_pods.go:61] "kube-proxy-dqs2h" [ad11d9fd-caaa-4026-86f8-aba3e5ac2834] Running
	I0914 00:39:01.964090  874848 system_pods.go:61] "kube-scheduler-addons-885748" [ae7fd70d-d206-474f-a967-53dc9227db19] Running
	I0914 00:39:01.964102  874848 system_pods.go:61] "metrics-server-84c5f94fbc-96xbg" [9c339307-23c2-46f3-af0b-9a4d12c82b32] Running
	I0914 00:39:01.964107  874848 system_pods.go:61] "nvidia-device-plugin-daemonset-9nphx" [8f3b2546-ef55-49b2-8f31-dd8f4ecdcf93] Running
	I0914 00:39:01.964113  874848 system_pods.go:61] "registry-66c9cd494c-bkhkl" [4d931f29-d87c-4bc8-8e58-88b441e56b0a] Running
	I0914 00:39:01.964118  874848 system_pods.go:61] "registry-proxy-fb2vb" [7d63ca1e-f5bf-47eb-84af-ebd01e9cd4b6] Running
	I0914 00:39:01.964127  874848 system_pods.go:61] "snapshot-controller-56fcc65765-8pfcj" [37872304-9181-40b4-8ebf-9958cdc3a7b0] Running
	I0914 00:39:01.964132  874848 system_pods.go:61] "snapshot-controller-56fcc65765-nwsdn" [bb956da0-8552-4d95-a92d-8a7311005caf] Running
	I0914 00:39:01.964136  874848 system_pods.go:61] "storage-provisioner" [c95fe42f-e257-4b52-ab42-54086f64f2e4] Running
	I0914 00:39:01.964143  874848 system_pods.go:74] duration metric: took 11.142756624s to wait for pod list to return data ...
	I0914 00:39:01.964165  874848 default_sa.go:34] waiting for default service account to be created ...
	I0914 00:39:01.967349  874848 default_sa.go:45] found service account: "default"
	I0914 00:39:01.967378  874848 default_sa.go:55] duration metric: took 3.206253ms for default service account to be created ...
	I0914 00:39:01.967389  874848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 00:39:01.979121  874848 system_pods.go:86] 18 kube-system pods found
	I0914 00:39:01.979159  874848 system_pods.go:89] "coredns-7c65d6cfc9-8m89r" [550228bd-69a1-4530-af98-0200cecdabf1] Running
	I0914 00:39:01.979168  874848 system_pods.go:89] "csi-hostpath-attacher-0" [cbc09b3c-e59c-4698-b6c7-f9d1746ab697] Running
	I0914 00:39:01.979173  874848 system_pods.go:89] "csi-hostpath-resizer-0" [1d0b01fe-048b-4b9e-82dd-5b408414180f] Running
	I0914 00:39:01.979178  874848 system_pods.go:89] "csi-hostpathplugin-mgx77" [456dedd2-11aa-43aa-8f21-e93340384161] Running
	I0914 00:39:01.979183  874848 system_pods.go:89] "etcd-addons-885748" [76fc0bec-b6e2-415d-8c2a-3bdb3f6bf113] Running
	I0914 00:39:01.979189  874848 system_pods.go:89] "kindnet-m55kx" [724646d8-f3df-4b7c-830a-ec84d16dc1c6] Running
	I0914 00:39:01.979194  874848 system_pods.go:89] "kube-apiserver-addons-885748" [c6447df2-c534-4e85-afc8-5da7d2435aa6] Running
	I0914 00:39:01.979199  874848 system_pods.go:89] "kube-controller-manager-addons-885748" [9727b4e8-1fa1-4175-b2ce-7bdd6ac0676c] Running
	I0914 00:39:01.979210  874848 system_pods.go:89] "kube-ingress-dns-minikube" [e6eb7e3a-203d-452a-b040-fbe431e6f08f] Running
	I0914 00:39:01.979215  874848 system_pods.go:89] "kube-proxy-dqs2h" [ad11d9fd-caaa-4026-86f8-aba3e5ac2834] Running
	I0914 00:39:01.979222  874848 system_pods.go:89] "kube-scheduler-addons-885748" [ae7fd70d-d206-474f-a967-53dc9227db19] Running
	I0914 00:39:01.979226  874848 system_pods.go:89] "metrics-server-84c5f94fbc-96xbg" [9c339307-23c2-46f3-af0b-9a4d12c82b32] Running
	I0914 00:39:01.979243  874848 system_pods.go:89] "nvidia-device-plugin-daemonset-9nphx" [8f3b2546-ef55-49b2-8f31-dd8f4ecdcf93] Running
	I0914 00:39:01.979273  874848 system_pods.go:89] "registry-66c9cd494c-bkhkl" [4d931f29-d87c-4bc8-8e58-88b441e56b0a] Running
	I0914 00:39:01.979280  874848 system_pods.go:89] "registry-proxy-fb2vb" [7d63ca1e-f5bf-47eb-84af-ebd01e9cd4b6] Running
	I0914 00:39:01.979284  874848 system_pods.go:89] "snapshot-controller-56fcc65765-8pfcj" [37872304-9181-40b4-8ebf-9958cdc3a7b0] Running
	I0914 00:39:01.979288  874848 system_pods.go:89] "snapshot-controller-56fcc65765-nwsdn" [bb956da0-8552-4d95-a92d-8a7311005caf] Running
	I0914 00:39:01.979292  874848 system_pods.go:89] "storage-provisioner" [c95fe42f-e257-4b52-ab42-54086f64f2e4] Running
	I0914 00:39:01.979298  874848 system_pods.go:126] duration metric: took 11.903645ms to wait for k8s-apps to be running ...
	I0914 00:39:01.979308  874848 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 00:39:01.979371  874848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:39:01.992649  874848 system_svc.go:56] duration metric: took 13.330968ms WaitForService to wait for kubelet
	I0914 00:39:01.992681  874848 kubeadm.go:582] duration metric: took 2m39.420274083s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:39:01.992702  874848 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:39:01.996886  874848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 00:39:01.996922  874848 node_conditions.go:123] node cpu capacity is 2
	I0914 00:39:01.996936  874848 node_conditions.go:105] duration metric: took 4.227243ms to run NodePressure ...
	I0914 00:39:01.996950  874848 start.go:241] waiting for startup goroutines ...
	I0914 00:39:01.996958  874848 start.go:246] waiting for cluster config update ...
	I0914 00:39:01.996976  874848 start.go:255] writing updated cluster config ...
	I0914 00:39:01.997319  874848 ssh_runner.go:195] Run: rm -f paused
	I0914 00:39:02.385531  874848 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 00:39:02.387155  874848 out.go:177] * Done! kubectl is now configured to use "addons-885748" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.161869799Z" level=info msg="Removed pod sandbox: df990be03239142f36b2104b3a687016d82e9d82fac260ef35d4ebe7def6b246" id=123503b8-7698-431a-9dc3-56a40298b05b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.162371382Z" level=info msg="Stopping pod sandbox: ddfa32f5cdd5f243ff5a1fd9b4458a09d752de4fdbf6170d3d6f8fbbe8c52f58" id=520755e3-fe07-4a28-b062-ab0a8b90ef22 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.162403439Z" level=info msg="Stopped pod sandbox (already stopped): ddfa32f5cdd5f243ff5a1fd9b4458a09d752de4fdbf6170d3d6f8fbbe8c52f58" id=520755e3-fe07-4a28-b062-ab0a8b90ef22 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.162829414Z" level=info msg="Removing pod sandbox: ddfa32f5cdd5f243ff5a1fd9b4458a09d752de4fdbf6170d3d6f8fbbe8c52f58" id=133592e5-844a-47e7-b192-9e912ebc473b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.173428296Z" level=info msg="Removed pod sandbox: ddfa32f5cdd5f243ff5a1fd9b4458a09d752de4fdbf6170d3d6f8fbbe8c52f58" id=133592e5-844a-47e7-b192-9e912ebc473b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.173900751Z" level=info msg="Stopping pod sandbox: b921a3f5f992a7bedf14b772801534f72714d4c37f776ffe0b7ef64b7be79ee4" id=b95ad502-2714-4dcf-9c12-ad500a64c443 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.173931036Z" level=info msg="Stopped pod sandbox (already stopped): b921a3f5f992a7bedf14b772801534f72714d4c37f776ffe0b7ef64b7be79ee4" id=b95ad502-2714-4dcf-9c12-ad500a64c443 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.174266199Z" level=info msg="Removing pod sandbox: b921a3f5f992a7bedf14b772801534f72714d4c37f776ffe0b7ef64b7be79ee4" id=64178f7f-07ec-4f53-a86f-e5368bcf6f28 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.183713126Z" level=info msg="Removed pod sandbox: b921a3f5f992a7bedf14b772801534f72714d4c37f776ffe0b7ef64b7be79ee4" id=64178f7f-07ec-4f53-a86f-e5368bcf6f28 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.184167982Z" level=info msg="Stopping pod sandbox: de3dd45daa441cc05b2bdb5accedc367ac5145364dfc335d19859b2fd60b7291" id=1d328c56-15fb-4e9b-9391-1f3b25fdd2eb name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.184203083Z" level=info msg="Stopped pod sandbox (already stopped): de3dd45daa441cc05b2bdb5accedc367ac5145364dfc335d19859b2fd60b7291" id=1d328c56-15fb-4e9b-9391-1f3b25fdd2eb name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.184518037Z" level=info msg="Removing pod sandbox: de3dd45daa441cc05b2bdb5accedc367ac5145364dfc335d19859b2fd60b7291" id=1485cb82-d45f-4335-8adf-f86d811ac010 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.204758015Z" level=info msg="Removed pod sandbox: de3dd45daa441cc05b2bdb5accedc367ac5145364dfc335d19859b2fd60b7291" id=1485cb82-d45f-4335-8adf-f86d811ac010 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.356067250Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3/POD" id=883e2a28-13e2-495d-b064-3caeb1ec5492 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.356129222Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.389832059Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3 Namespace:local-path-storage ID:4379a7d709301388e76eef1148811bd9e7b46f9b03ff88e952f0c443c3553f85 UID:cb76d33c-eace-4721-a23c-7ad33c0ba41b NetNS:/var/run/netns/843b4790-18be-400e-8fab-5dabe1fed242 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.389897772Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3 to CNI network \"kindnet\" (type=ptp)"
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.410185642Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3 Namespace:local-path-storage ID:4379a7d709301388e76eef1148811bd9e7b46f9b03ff88e952f0c443c3553f85 UID:cb76d33c-eace-4721-a23c-7ad33c0ba41b NetNS:/var/run/netns/843b4790-18be-400e-8fab-5dabe1fed242 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.410332330Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3 for CNI network kindnet (type=ptp)"
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.413199672Z" level=info msg="Ran pod sandbox 4379a7d709301388e76eef1148811bd9e7b46f9b03ff88e952f0c443c3553f85 with infra container: local-path-storage/helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3/POD" id=883e2a28-13e2-495d-b064-3caeb1ec5492 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.414383552Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=813f082a-60d6-4fe5-8e1c-f13d7518b91c name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.414622241Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=813f082a-60d6-4fe5-8e1c-f13d7518b91c name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.415495836Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=22012c93-c9bc-405c-bd0e-a1611fe6248e name=/runtime.v1.ImageService/PullImage
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.418160475Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Sep 14 00:48:18 addons-885748 crio[965]: time="2024-09-14 00:48:18.714299484Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2da68367dbc69       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            15 seconds ago      Exited              gadget                    7                   2e83b32cdaef1       gadget-9rb75
	11df4840b3ccd       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             9 minutes ago       Running             controller                0                   08cadfc75ed5b       ingress-nginx-controller-bc57996ff-h95vr
	fc0328f66b9e0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 10 minutes ago      Running             gcp-auth                  0                   d7b70729e47d5       gcp-auth-89d5ffd79-frj5t
	826b0a5e7c152       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   10 minutes ago      Exited              patch                     0                   c6e0bd643d42b       ingress-nginx-admission-patch-tdznv
	8333c07f8b12c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   10 minutes ago      Exited              create                    0                   20383ba456212       ingress-nginx-admission-create-f82md
	3256fc0da8a44       gcr.io/cloud-spanner-emulator/emulator@sha256:41ec188288c7943f488600462b2b74002814e52439be82d15de33c3ee4898a58               10 minutes ago      Running             cloud-spanner-emulator    0                   25a3b2eaa1b39       cloud-spanner-emulator-769b77f747-qnlnm
	8091d19cac440       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             10 minutes ago      Running             local-path-provisioner    0                   1220f396bbf80       local-path-provisioner-86d989889c-dlghs
	7d56766635b73       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        10 minutes ago      Running             metrics-server            0                   bb338c2f32bcd       metrics-server-84c5f94fbc-96xbg
	4c38221755a81       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             10 minutes ago      Running             minikube-ingress-dns      0                   03389f2005252       kube-ingress-dns-minikube
	80e8332c931e9       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             11 minutes ago      Running             coredns                   0                   249d9842b4544       coredns-7c65d6cfc9-8m89r
	ebb2e7bdbbfd4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             11 minutes ago      Running             storage-provisioner       0                   105d379cff026       storage-provisioner
	56f7319a8a8d6       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             11 minutes ago      Running             kindnet-cni               0                   0c72d454012fd       kindnet-m55kx
	a47b8e8869ee8       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             11 minutes ago      Running             kube-proxy                0                   ffefb18074c57       kube-proxy-dqs2h
	48d812ac2652a       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             12 minutes ago      Running             kube-apiserver            0                   828ea1cf2ba92       kube-apiserver-addons-885748
	f3056a13deffd       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             12 minutes ago      Running             kube-controller-manager   0                   cc2cb3c49ab23       kube-controller-manager-addons-885748
	d793e5939094c       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             12 minutes ago      Running             kube-scheduler            0                   8aac50a11aa1f       kube-scheduler-addons-885748
	f8b9a437608b9       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             12 minutes ago      Running             etcd                      0                   fecaa719a39f6       etcd-addons-885748
	
	
	==> coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] <==
	[INFO] 10.244.0.12:33235 - 38053 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000139427s
	[INFO] 10.244.0.12:39174 - 20223 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002168352s
	[INFO] 10.244.0.12:39174 - 34553 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001903267s
	[INFO] 10.244.0.12:55989 - 9515 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000126611s
	[INFO] 10.244.0.12:55989 - 28949 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008036s
	[INFO] 10.244.0.12:44725 - 25596 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000196631s
	[INFO] 10.244.0.12:44725 - 58609 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000336263s
	[INFO] 10.244.0.12:37024 - 61418 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000840966s
	[INFO] 10.244.0.12:37024 - 33000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112703s
	[INFO] 10.244.0.12:43586 - 62400 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096252s
	[INFO] 10.244.0.12:43586 - 21956 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000270361s
	[INFO] 10.244.0.12:39958 - 65451 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001701713s
	[INFO] 10.244.0.12:39958 - 44969 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001315145s
	[INFO] 10.244.0.12:45882 - 11582 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000106755s
	[INFO] 10.244.0.12:45882 - 57120 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000056901s
	[INFO] 10.244.0.20:40467 - 22508 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0009463s
	[INFO] 10.244.0.20:47828 - 50659 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016816s
	[INFO] 10.244.0.20:56008 - 60050 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000270443s
	[INFO] 10.244.0.20:57451 - 45764 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000274734s
	[INFO] 10.244.0.20:45104 - 4965 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000161744s
	[INFO] 10.244.0.20:37823 - 38164 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000285663s
	[INFO] 10.244.0.20:38730 - 54617 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003590072s
	[INFO] 10.244.0.20:48720 - 43288 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003891685s
	[INFO] 10.244.0.20:50211 - 8144 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002567145s
	[INFO] 10.244.0.20:33394 - 12183 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002678577s
	
	
	==> describe nodes <==
	Name:               addons-885748
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-885748
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=addons-885748
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_36_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-885748
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:36:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-885748
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:48:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:48:02 +0000   Sat, 14 Sep 2024 00:36:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:48:02 +0000   Sat, 14 Sep 2024 00:36:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:48:02 +0000   Sat, 14 Sep 2024 00:36:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:48:02 +0000   Sat, 14 Sep 2024 00:37:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-885748
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 4359fb52d09b48a99b9422f7ed1aab10
	  System UUID:                97520139-af6f-4519-ad5d-f1e74ef171eb
	  Boot ID:                    fb6d1488-4ff6-49a9-b7dc-0ab0c636005f
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-qnlnm                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-9rb75                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-frj5t                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-h95vr                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-8m89r                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-addons-885748                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-m55kx                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-885748                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-885748                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-dqs2h                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-885748                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-96xbg                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-86d989889c-dlghs                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-885748 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-885748 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-885748 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node addons-885748 event: Registered Node addons-885748 in Controller
	  Normal   NodeReady                11m   kubelet          Node addons-885748 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] <==
	{"level":"info","ts":"2024-09-14T00:36:11.809298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-14T00:36:11.809443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T00:36:11.809526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-14T00:36:11.809593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.809640Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.809693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.809738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.812081Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-885748 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:36:11.812295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:36:11.813737Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:11.813914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:36:11.816988Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:36:11.817279Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:36:11.817309Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:36:11.817872Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:36:11.818699Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:36:11.819114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-14T00:36:11.819222Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:11.821360Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:11.821444Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:23.269674Z","caller":"traceutil/trace.go:171","msg":"trace[1517059370] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"215.568054ms","start":"2024-09-14T00:36:23.054082Z","end":"2024-09-14T00:36:23.269650Z","steps":["trace[1517059370] 'process raft request'  (duration: 116.127873ms)","trace[1517059370] 'compare'  (duration: 99.327166ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T00:38:00.269469Z","caller":"traceutil/trace.go:171","msg":"trace[1121860251] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"101.073361ms","start":"2024-09-14T00:38:00.168377Z","end":"2024-09-14T00:38:00.269450Z","steps":["trace[1121860251] 'process raft request'  (duration: 92.406917ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:46:12.426019Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-09-14T00:46:12.459407Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"32.859233ms","hash":3731172354,"current-db-size-bytes":6463488,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3293184,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-14T00:46:12.459464Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3731172354,"revision":1514,"compact-revision":-1}
	
	
	==> gcp-auth [fc0328f66b9e0b6021b961c2ff50a7c98d37c2056b93b1910e5cac7120024106] <==
	2024/09/14 00:38:16 GCP Auth Webhook started!
	2024/09/14 00:39:02 Ready to marshal response ...
	2024/09/14 00:39:02 Ready to write response ...
	2024/09/14 00:39:02 Ready to marshal response ...
	2024/09/14 00:39:02 Ready to write response ...
	2024/09/14 00:39:02 Ready to marshal response ...
	2024/09/14 00:39:02 Ready to write response ...
	2024/09/14 00:47:16 Ready to marshal response ...
	2024/09/14 00:47:16 Ready to write response ...
	2024/09/14 00:47:20 Ready to marshal response ...
	2024/09/14 00:47:20 Ready to write response ...
	2024/09/14 00:47:42 Ready to marshal response ...
	2024/09/14 00:47:42 Ready to write response ...
	2024/09/14 00:48:17 Ready to marshal response ...
	2024/09/14 00:48:17 Ready to write response ...
	2024/09/14 00:48:18 Ready to marshal response ...
	2024/09/14 00:48:18 Ready to write response ...
	
	
	==> kernel <==
	 00:48:19 up  4:30,  0 users,  load average: 0.56, 0.82, 1.99
	Linux addons-885748 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] <==
	I0914 00:46:16.817399       1 main.go:299] handling current node
	I0914 00:46:26.814564       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:46:26.814600       1 main.go:299] handling current node
	I0914 00:46:36.818775       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:46:36.818815       1 main.go:299] handling current node
	I0914 00:46:46.822372       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:46:46.822407       1 main.go:299] handling current node
	I0914 00:46:56.814588       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:46:56.814626       1 main.go:299] handling current node
	I0914 00:47:06.814664       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:47:06.814703       1 main.go:299] handling current node
	I0914 00:47:16.815150       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:47:16.815180       1 main.go:299] handling current node
	I0914 00:47:26.815536       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:47:26.815568       1 main.go:299] handling current node
	I0914 00:47:36.817324       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:47:36.817357       1 main.go:299] handling current node
	I0914 00:47:46.814576       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:47:46.814696       1 main.go:299] handling current node
	I0914 00:47:56.815631       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:47:56.815693       1 main.go:299] handling current node
	I0914 00:48:06.814556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:48:06.814590       1 main.go:299] handling current node
	I0914 00:48:16.815399       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:48:16.815455       1 main.go:299] handling current node
	
	
	==> kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] <==
	E0914 00:37:29.368630       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0914 00:37:29.369796       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0914 00:38:28.079451       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.50.17:443: connect: connection refused" logger="UnhandledError"
	W0914 00:38:28.079625       1 handler_proxy.go:99] no RequestInfo found in the context
	E0914 00:38:28.079688       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0914 00:38:28.081595       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.50.17:443: connect: connection refused" logger="UnhandledError"
	E0914 00:38:28.087158       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.50.17:443: connect: connection refused" logger="UnhandledError"
	I0914 00:38:28.202850       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 00:47:30.583195       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0914 00:47:58.487976       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.488113       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.515016       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.515142       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.531465       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.531532       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.560654       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.560914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.650738       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.651626       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0914 00:47:59.605910       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0914 00:47:59.652010       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 00:47:59.748780       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] <==
	E0914 00:48:00.664577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:48:00.841121       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:48:00.841165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:48:01.234709       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:48:01.234749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 00:48:02.244725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-885748"
	W0914 00:48:03.460964       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:48:03.461021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:48:03.480800       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:48:03.480923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:48:03.705459       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:48:03.705607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 00:48:05.482487       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="5.226µs"
	W0914 00:48:07.291465       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:48:07.291509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:48:09.193044       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:48:09.193092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:48:09.332577       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:48:09.332620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:48:14.035878       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:48:14.035918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 00:48:15.589177       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0914 00:48:17.094222       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.694µs"
	W0914 00:48:18.727824       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:48:18.727870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] <==
	I0914 00:36:28.440930       1 server_linux.go:66] "Using iptables proxy"
	I0914 00:36:28.793093       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0914 00:36:28.800749       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:36:28.881787       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 00:36:28.881853       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:36:28.885486       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:36:28.886040       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:36:28.886064       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:36:28.895020       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:36:28.895563       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:36:28.895926       1 config.go:199] "Starting service config controller"
	I0914 00:36:28.895990       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:36:28.896039       1 config.go:328] "Starting node config controller"
	I0914 00:36:28.896085       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:36:28.997049       1 shared_informer.go:320] Caches are synced for node config
	I0914 00:36:28.997094       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:36:28.997136       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] <==
	W0914 00:36:15.114600       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 00:36:15.115262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:15.991953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 00:36:15.991999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.008831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 00:36:16.008904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.013935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 00:36:16.013978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.023529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 00:36:16.023578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.064172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 00:36:16.064214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.099965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 00:36:16.100011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.138383       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 00:36:16.138445       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 00:36:16.186067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 00:36:16.186184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.187287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 00:36:16.187385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.203065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 00:36:16.203112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.215830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 00:36:16.215921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0914 00:36:18.915639       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 00:48:18 addons-885748 kubelet[1502]: E0914 00:48:18.054797    1502 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d931f29-d87c-4bc8-8e58-88b441e56b0a" containerName="registry"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: E0914 00:48:18.054803    1502 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cbc09b3c-e59c-4698-b6c7-f9d1746ab697" containerName="csi-attacher"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: E0914 00:48:18.054810    1502 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af09cb1a-668a-4764-b6a3-109a5be57346" containerName="yakd"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: E0914 00:48:18.054816    1502 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="456dedd2-11aa-43aa-8f21-e93340384161" containerName="node-driver-registrar"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054856    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="37872304-9181-40b4-8ebf-9958cdc3a7b0" containerName="volume-snapshot-controller"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054866    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="af09cb1a-668a-4764-b6a3-109a5be57346" containerName="yakd"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054874    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="456dedd2-11aa-43aa-8f21-e93340384161" containerName="liveness-probe"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054882    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbbe26c-76e6-4a0d-b9de-0f03bbbe870a" containerName="task-pv-container"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054889    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="456dedd2-11aa-43aa-8f21-e93340384161" containerName="hostpath"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054897    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d931f29-d87c-4bc8-8e58-88b441e56b0a" containerName="registry"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054903    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3b2546-ef55-49b2-8f31-dd8f4ecdcf93" containerName="nvidia-device-plugin-ctr"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054911    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb956da0-8552-4d95-a92d-8a7311005caf" containerName="volume-snapshot-controller"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054918    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc09b3c-e59c-4698-b6c7-f9d1746ab697" containerName="csi-attacher"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054924    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d63ca1e-f5bf-47eb-84af-ebd01e9cd4b6" containerName="registry-proxy"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054929    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="456dedd2-11aa-43aa-8f21-e93340384161" containerName="csi-snapshotter"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054935    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="456dedd2-11aa-43aa-8f21-e93340384161" containerName="csi-external-health-monitor-controller"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054942    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="456dedd2-11aa-43aa-8f21-e93340384161" containerName="node-driver-registrar"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054950    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0b01fe-048b-4b9e-82dd-5b408414180f" containerName="csi-resizer"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.054956    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="456dedd2-11aa-43aa-8f21-e93340384161" containerName="csi-provisioner"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.104693    1502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlr5w\" (UniqueName: \"kubernetes.io/projected/cb76d33c-eace-4721-a23c-7ad33c0ba41b-kube-api-access-vlr5w\") pod \"helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3\" (UID: \"cb76d33c-eace-4721-a23c-7ad33c0ba41b\") " pod="local-path-storage/helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.104739    1502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/cb76d33c-eace-4721-a23c-7ad33c0ba41b-script\") pod \"helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3\" (UID: \"cb76d33c-eace-4721-a23c-7ad33c0ba41b\") " pod="local-path-storage/helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.104763    1502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/cb76d33c-eace-4721-a23c-7ad33c0ba41b-data\") pod \"helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3\" (UID: \"cb76d33c-eace-4721-a23c-7ad33c0ba41b\") " pod="local-path-storage/helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: I0914 00:48:18.104781    1502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cb76d33c-eace-4721-a23c-7ad33c0ba41b-gcp-creds\") pod \"helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3\" (UID: \"cb76d33c-eace-4721-a23c-7ad33c0ba41b\") " pod="local-path-storage/helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: E0914 00:48:18.114688    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726274898114451661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:488968,},InodesUsed:&UInt64Value{Value:189,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:48:18 addons-885748 kubelet[1502]: E0914 00:48:18.114720    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726274898114451661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:488968,},InodesUsed:&UInt64Value{Value:189,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [ebb2e7bdbbfd4e15a1df8147f8ab8e288ada7a8b4fb1482db8fd01effcb11eef] <==
	I0914 00:37:07.958850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 00:37:07.973989       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 00:37:07.974042       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 00:37:07.989411       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 00:37:07.989599       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-885748_49a32771-25bb-4138-a77d-2ab9ae6fe758!
	I0914 00:37:07.993894       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1f3463c-ac9e-45b9-aadc-bdd81184edd4", APIVersion:"v1", ResourceVersion:"870", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-885748_49a32771-25bb-4138-a77d-2ab9ae6fe758 became leader
	I0914 00:37:08.090873       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-885748_49a32771-25bb-4138-a77d-2ab9ae6fe758!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-885748 -n addons-885748
helpers_test.go:261: (dbg) Run:  kubectl --context addons-885748 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path ingress-nginx-admission-create-f82md ingress-nginx-admission-patch-tdznv helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-885748 describe pod busybox test-local-path ingress-nginx-admission-create-f82md ingress-nginx-admission-patch-tdznv helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-885748 describe pod busybox test-local-path ingress-nginx-admission-create-f82md ingress-nginx-admission-patch-tdznv helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3: exit status 1 (100.988286ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-885748/192.168.49.2
	Start Time:       Sat, 14 Sep 2024 00:39:02 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9c6mj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9c6mj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-885748
	  Normal   Pulling    7m47s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m47s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m47s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m35s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gmpv (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-5gmpv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-f82md" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tdznv" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-885748 describe pod busybox test-local-path ingress-nginx-admission-create-f82md ingress-nginx-admission-patch-tdznv helper-pod-create-pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.42s)

TestAddons/parallel/Ingress (153.11s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-885748 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-885748 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-885748 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f132bdda-d70d-447c-8ab2-3a21cf948dac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f132bdda-d70d-447c-8ab2-3a21cf948dac] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003973382s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-885748 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.717107561s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-885748 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-885748 addons disable ingress-dns --alsologtostderr -v=1: (1.726916916s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-885748 addons disable ingress --alsologtostderr -v=1: (7.741192607s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-885748
helpers_test.go:235: (dbg) docker inspect addons-885748:

-- stdout --
	[
	    {
	        "Id": "16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a",
	        "Created": "2024-09-14T00:35:51.693021132Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 875338,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-14T00:35:51.852610858Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fe3365929e6ce54b4c06f0bc3d1500dff08f535844ef4978f2c45cd67c542134",
	        "ResolvConfPath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/hostname",
	        "HostsPath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/hosts",
	        "LogPath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a-json.log",
	        "Name": "/addons-885748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-885748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-885748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee-init/diff:/var/lib/docker/overlay2/75b2121147f32424fffc5e50d2609c96cf2fdc411273d8660afbb09b8a3ad07a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-885748",
	                "Source": "/var/lib/docker/volumes/addons-885748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-885748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-885748",
	                "name.minikube.sigs.k8s.io": "addons-885748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1a2274d2fe074b454d8fc13c1575d8f017a8d3113ed94af95faf9d1d2583971",
	            "SandboxKey": "/var/run/docker/netns/b1a2274d2fe0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33564"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33565"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33568"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33566"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33567"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-885748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c1a0d21fd124d60633c329f0674dc6666a0292fe6f6b1be172c6bb2b7fa6a718",
	                    "EndpointID": "ce9472c43f8e5b4bcc4e1fe669f69274e4050166515b932738a2ad8472c5184d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-885748",
	                        "16a9106e2bf9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-885748 -n addons-885748
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-885748 logs -n 25: (1.432612583s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-116392                                                                     | download-only-116392   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| delete  | -p download-only-396021                                                                     | download-only-396021   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| start   | --download-only -p                                                                          | download-docker-830102 | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | download-docker-830102                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-830102                                                                   | download-docker-830102 | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-918324   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | binary-mirror-918324                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44679                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-918324                                                                     | binary-mirror-918324   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| addons  | disable dashboard -p                                                                        | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | addons-885748                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | addons-885748                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-885748 --wait=true                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-885748 addons                                                                        | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:47 UTC | 14 Sep 24 00:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-885748 addons                                                                        | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:47 UTC | 14 Sep 24 00:47 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-885748 ip                                                                            | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | -p addons-885748                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-885748 ssh cat                                                                       | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | /opt/local-path-provisioner/pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | addons-885748                                                                               |                        |         |         |                     |                     |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | -p addons-885748                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | addons-885748                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-885748 ssh curl -s                                                                   | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:49 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-885748 ip                                                                            | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:51 UTC | 14 Sep 24 00:51 UTC |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:51 UTC | 14 Sep 24 00:51 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:51 UTC | 14 Sep 24 00:51 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:35:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:35:27.648597  874848 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:35:27.648788  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:27.648825  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:35:27.648839  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:27.649116  874848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 00:35:27.649628  874848 out.go:352] Setting JSON to false
	I0914 00:35:27.650620  874848 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15472,"bootTime":1726258656,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0914 00:35:27.650703  874848 start.go:139] virtualization:  
	I0914 00:35:27.652331  874848 out.go:177] * [addons-885748] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 00:35:27.654538  874848 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:35:27.654641  874848 notify.go:220] Checking for updates...
	I0914 00:35:27.657216  874848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:35:27.658569  874848 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 00:35:27.659690  874848 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	I0914 00:35:27.661085  874848 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 00:35:27.662124  874848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:35:27.663629  874848 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:35:27.685055  874848 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:35:27.685194  874848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:27.743708  874848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:35:27.734595728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:27.743818  874848 docker.go:318] overlay module found
	I0914 00:35:27.746292  874848 out.go:177] * Using the docker driver based on user configuration
	I0914 00:35:27.747376  874848 start.go:297] selected driver: docker
	I0914 00:35:27.747391  874848 start.go:901] validating driver "docker" against <nil>
	I0914 00:35:27.747405  874848 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:35:27.748035  874848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:27.802291  874848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:35:27.792988752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:27.802504  874848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:35:27.802746  874848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:35:27.803994  874848 out.go:177] * Using Docker driver with root privileges
	I0914 00:35:27.804986  874848 cni.go:84] Creating CNI manager for ""
	I0914 00:35:27.805046  874848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:35:27.805057  874848 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 00:35:27.805150  874848 start.go:340] cluster config:
	{Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:35:27.807346  874848 out.go:177] * Starting "addons-885748" primary control-plane node in "addons-885748" cluster
	I0914 00:35:27.808476  874848 cache.go:121] Beginning downloading kic base image for docker with crio
	I0914 00:35:27.809606  874848 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0914 00:35:27.810871  874848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:27.810920  874848 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0914 00:35:27.810932  874848 cache.go:56] Caching tarball of preloaded images
	I0914 00:35:27.810960  874848 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 00:35:27.811021  874848 preload.go:172] Found /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 00:35:27.811031  874848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:35:27.811392  874848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/config.json ...
	I0914 00:35:27.811450  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/config.json: {Name:mk574a8eb9ef8f9e3b261644b0ca0e71c6fc48e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:27.826453  874848 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 00:35:27.826558  874848 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 00:35:27.826581  874848 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 00:35:27.826586  874848 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 00:35:27.826598  874848 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 00:35:27.826604  874848 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0914 00:35:44.803607  874848 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0914 00:35:44.803643  874848 cache.go:194] Successfully downloaded all kic artifacts
	I0914 00:35:44.803673  874848 start.go:360] acquireMachinesLock for addons-885748: {Name:mk9ddda16eaf26a40c295d659f1e42acd6143125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:35:44.803799  874848 start.go:364] duration metric: took 104.539µs to acquireMachinesLock for "addons-885748"
	I0914 00:35:44.803830  874848 start.go:93] Provisioning new machine with config: &{Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:35:44.803926  874848 start.go:125] createHost starting for "" (driver="docker")
	I0914 00:35:44.805508  874848 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0914 00:35:44.805767  874848 start.go:159] libmachine.API.Create for "addons-885748" (driver="docker")
	I0914 00:35:44.805803  874848 client.go:168] LocalClient.Create starting
	I0914 00:35:44.805931  874848 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem
	I0914 00:35:45.234194  874848 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem
	I0914 00:35:45.623675  874848 cli_runner.go:164] Run: docker network inspect addons-885748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 00:35:45.638888  874848 cli_runner.go:211] docker network inspect addons-885748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 00:35:45.638974  874848 network_create.go:284] running [docker network inspect addons-885748] to gather additional debugging logs...
	I0914 00:35:45.638996  874848 cli_runner.go:164] Run: docker network inspect addons-885748
	W0914 00:35:45.653957  874848 cli_runner.go:211] docker network inspect addons-885748 returned with exit code 1
	I0914 00:35:45.653988  874848 network_create.go:287] error running [docker network inspect addons-885748]: docker network inspect addons-885748: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-885748 not found
	I0914 00:35:45.654007  874848 network_create.go:289] output of [docker network inspect addons-885748]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-885748 not found
	
	** /stderr **
	I0914 00:35:45.654106  874848 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 00:35:45.672611  874848 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400048fee0}
	I0914 00:35:45.672659  874848 network_create.go:124] attempt to create docker network addons-885748 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 00:35:45.672715  874848 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-885748 addons-885748
	I0914 00:35:45.738461  874848 network_create.go:108] docker network addons-885748 192.168.49.0/24 created
	I0914 00:35:45.738494  874848 kic.go:121] calculated static IP "192.168.49.2" for the "addons-885748" container
	I0914 00:35:45.738570  874848 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 00:35:45.753096  874848 cli_runner.go:164] Run: docker volume create addons-885748 --label name.minikube.sigs.k8s.io=addons-885748 --label created_by.minikube.sigs.k8s.io=true
	I0914 00:35:45.768446  874848 oci.go:103] Successfully created a docker volume addons-885748
	I0914 00:35:45.768544  874848 cli_runner.go:164] Run: docker run --rm --name addons-885748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885748 --entrypoint /usr/bin/test -v addons-885748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib
	I0914 00:35:47.532910  874848 cli_runner.go:217] Completed: docker run --rm --name addons-885748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885748 --entrypoint /usr/bin/test -v addons-885748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib: (1.764328482s)
	I0914 00:35:47.532939  874848 oci.go:107] Successfully prepared a docker volume addons-885748
	I0914 00:35:47.532965  874848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:47.532986  874848 kic.go:194] Starting extracting preloaded images to volume ...
	I0914 00:35:47.533050  874848 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-885748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 00:35:51.627808  874848 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-885748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir: (4.094718795s)
	I0914 00:35:51.627842  874848 kic.go:203] duration metric: took 4.094852633s to extract preloaded images to volume ...
	W0914 00:35:51.627991  874848 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 00:35:51.628114  874848 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 00:35:51.679472  874848 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-885748 --name addons-885748 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885748 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-885748 --network addons-885748 --ip 192.168.49.2 --volume addons-885748:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243
	I0914 00:35:52.026413  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Running}}
	I0914 00:35:52.054130  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:35:52.081150  874848 cli_runner.go:164] Run: docker exec addons-885748 stat /var/lib/dpkg/alternatives/iptables
	I0914 00:35:52.151645  874848 oci.go:144] the created container "addons-885748" has a running status.
	I0914 00:35:52.151674  874848 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa...
	I0914 00:35:52.411723  874848 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 00:35:52.437353  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:35:52.459127  874848 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 00:35:52.459149  874848 kic_runner.go:114] Args: [docker exec --privileged addons-885748 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 00:35:52.535444  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:35:52.561504  874848 machine.go:93] provisionDockerMachine start ...
	I0914 00:35:52.561596  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:52.592426  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:52.592702  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:52.592718  874848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 00:35:52.593577  874848 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48640->127.0.0.1:33564: read: connection reset by peer
	I0914 00:35:55.712678  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-885748
	
	I0914 00:35:55.712704  874848 ubuntu.go:169] provisioning hostname "addons-885748"
	I0914 00:35:55.712793  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:55.730083  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:55.730330  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:55.730355  874848 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-885748 && echo "addons-885748" | sudo tee /etc/hostname
	I0914 00:35:55.863937  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-885748
	
	I0914 00:35:55.864025  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:55.884479  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:55.884728  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:55.884753  874848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-885748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-885748/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-885748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:35:56.006206  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:35:56.006299  874848 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-868698/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-868698/.minikube}
	I0914 00:35:56.006369  874848 ubuntu.go:177] setting up certificates
	I0914 00:35:56.006397  874848 provision.go:84] configureAuth start
	I0914 00:35:56.006497  874848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885748
	I0914 00:35:56.025656  874848 provision.go:143] copyHostCerts
	I0914 00:35:56.025744  874848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem (1078 bytes)
	I0914 00:35:56.025874  874848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem (1123 bytes)
	I0914 00:35:56.025946  874848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem (1679 bytes)
	I0914 00:35:56.026001  874848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem org=jenkins.addons-885748 san=[127.0.0.1 192.168.49.2 addons-885748 localhost minikube]
	I0914 00:35:56.397039  874848 provision.go:177] copyRemoteCerts
	I0914 00:35:56.397111  874848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:35:56.397152  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.413576  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:56.502071  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:35:56.525597  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:35:56.549087  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 00:35:56.572443  874848 provision.go:87] duration metric: took 566.020273ms to configureAuth
	I0914 00:35:56.572469  874848 ubuntu.go:193] setting minikube options for container-runtime
	I0914 00:35:56.572641  874848 config.go:182] Loaded profile config "addons-885748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:35:56.572750  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.589020  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:56.589468  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:56.589494  874848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:35:56.813689  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:35:56.813714  874848 machine.go:96] duration metric: took 4.252187622s to provisionDockerMachine
	I0914 00:35:56.813724  874848 client.go:171] duration metric: took 12.007912s to LocalClient.Create
	I0914 00:35:56.813737  874848 start.go:167] duration metric: took 12.007978992s to libmachine.API.Create "addons-885748"
	I0914 00:35:56.813745  874848 start.go:293] postStartSetup for "addons-885748" (driver="docker")
	I0914 00:35:56.813756  874848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:35:56.813824  874848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:35:56.813884  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.830802  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:56.918469  874848 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:35:56.921566  874848 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 00:35:56.921600  874848 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 00:35:56.921611  874848 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 00:35:56.921619  874848 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 00:35:56.921629  874848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/addons for local assets ...
	I0914 00:35:56.921700  874848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/files for local assets ...
	I0914 00:35:56.921730  874848 start.go:296] duration metric: took 107.979103ms for postStartSetup
	I0914 00:35:56.922050  874848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885748
	I0914 00:35:56.937996  874848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/config.json ...
	I0914 00:35:56.938300  874848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:35:56.938349  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.957478  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:57.042229  874848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 00:35:57.047056  874848 start.go:128] duration metric: took 12.243112242s to createHost
	I0914 00:35:57.047078  874848 start.go:83] releasing machines lock for "addons-885748", held for 12.243266454s
	I0914 00:35:57.047155  874848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885748
	I0914 00:35:57.063313  874848 ssh_runner.go:195] Run: cat /version.json
	I0914 00:35:57.063378  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:57.063655  874848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:35:57.063724  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:57.084371  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:57.094261  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:57.308581  874848 ssh_runner.go:195] Run: systemctl --version
	I0914 00:35:57.312939  874848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:35:57.451620  874848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 00:35:57.455973  874848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:35:57.477002  874848 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 00:35:57.477132  874848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:35:57.511110  874848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0914 00:35:57.511137  874848 start.go:495] detecting cgroup driver to use...
	I0914 00:35:57.511169  874848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 00:35:57.511217  874848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:35:57.526481  874848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:35:57.538293  874848 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:35:57.538364  874848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:35:57.552686  874848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:35:57.568072  874848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:35:57.662991  874848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:35:57.755248  874848 docker.go:233] disabling docker service ...
	I0914 00:35:57.755320  874848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:35:57.774750  874848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:35:57.786925  874848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:35:57.878521  874848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:35:57.968297  874848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:35:57.980122  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:35:57.996615  874848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 00:35:57.996733  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.007909  874848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:35:58.008088  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.019602  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.030797  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.040901  874848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:35:58.051366  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.061514  874848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.077469  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.087600  874848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:35:58.096431  874848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:35:58.104922  874848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:35:58.194238  874848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 00:35:58.315200  874848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:35:58.315290  874848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:35:58.319115  874848 start.go:563] Will wait 60s for crictl version
	I0914 00:35:58.319183  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:35:58.322590  874848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:35:58.360321  874848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 00:35:58.360485  874848 ssh_runner.go:195] Run: crio --version
	I0914 00:35:58.401355  874848 ssh_runner.go:195] Run: crio --version
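The run above (00:35:57.996 through 00:35:58.401) rewrites the CRI-O drop-in, restarts the runtime, and checks its version. As a minimal sketch for reproducing that configuration step outside the test harness, assuming the same drop-in path /etc/crio/crio.conf.d/02-crio.conf that minikube edits here, the sequence collapses to:

    # Point CRI-O at the pause image and cgroup driver minikube expects, then restart it.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl version   # should report RuntimeName: cri-o once the socket is back up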
	I0914 00:35:58.441347  874848 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0914 00:35:58.443849  874848 cli_runner.go:164] Run: docker network inspect addons-885748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 00:35:58.459835  874848 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 00:35:58.463371  874848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:35:58.473895  874848 kubeadm.go:883] updating cluster {Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:35:58.474017  874848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:58.474077  874848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:35:58.547909  874848 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:35:58.547932  874848 crio.go:433] Images already preloaded, skipping extraction
	I0914 00:35:58.547987  874848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:35:58.584064  874848 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:35:58.584085  874848 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:35:58.584094  874848 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0914 00:35:58.584187  874848 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-885748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
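The kubelet unit drop-in shown above is only rendered at this point; it is written to the node a few lines below (scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A small sketch, assuming SSH access to the node, for inspecting what systemd actually picked up:

    # Show the kubelet unit together with every drop-in, then its current state.
    systemctl cat kubelet
    systemctl status kubelet --no-pager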
	I0914 00:35:58.584272  874848 ssh_runner.go:195] Run: crio config
	I0914 00:35:58.630750  874848 cni.go:84] Creating CNI manager for ""
	I0914 00:35:58.630773  874848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:35:58.630784  874848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:35:58.630808  874848 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-885748 NodeName:addons-885748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:35:58.630990  874848 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-885748"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
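This is the end of the generated kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file). Once it has been copied onto the node as /var/tmp/minikube/kubeadm.yaml (which happens below), recent kubeadm releases can sanity-check it before init; a hedged sketch, assuming the bundled v1.31.1 binary and that path:

    # Optional pre-flight check of the rendered config; not something minikube runs itself.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml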
	
	I0914 00:35:58.631062  874848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:35:58.639996  874848 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:35:58.640108  874848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:35:58.648765  874848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0914 00:35:58.666409  874848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:35:58.684328  874848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0914 00:35:58.702308  874848 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 00:35:58.705701  874848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:35:58.716106  874848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:35:58.806646  874848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:35:58.820194  874848 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748 for IP: 192.168.49.2
	I0914 00:35:58.820228  874848 certs.go:194] generating shared ca certs ...
	I0914 00:35:58.820260  874848 certs.go:226] acquiring lock for ca certs: {Name:mk51aad7f25871620dee3805dbb159a74d927d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:58.821048  874848 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key
	I0914 00:35:59.115008  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt ...
	I0914 00:35:59.115046  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt: {Name:mk7e420a6f4116f40ba205310e9949cc0a07cff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.115273  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key ...
	I0914 00:35:59.115289  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key: {Name:mk6495fd05c501516a1dbc6a3c5a3d111749eaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.115383  874848 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key
	I0914 00:35:59.669563  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt ...
	I0914 00:35:59.669645  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt: {Name:mk74326826b78a79963a2466e661d640c5de6beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.670798  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key ...
	I0914 00:35:59.670831  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key: {Name:mkaa14c9fcec32cffb1eac0dcfd1682b507c2fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.671658  874848 certs.go:256] generating profile certs ...
	I0914 00:35:59.671756  874848 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.key
	I0914 00:35:59.671786  874848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt with IP's: []
	I0914 00:36:00.652822  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt ...
	I0914 00:36:00.652865  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: {Name:mk1fbf9bed840a2d57fd0d4fd8e94a75ab019179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.653669  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.key ...
	I0914 00:36:00.653689  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.key: {Name:mkbe4a15da3a2ff3d45a92e0a1634742aa384a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.654315  874848 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37
	I0914 00:36:00.654340  874848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0914 00:36:00.819327  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37 ...
	I0914 00:36:00.819359  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37: {Name:mk886299dc91db0af4189545598b67789e917e31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.820194  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37 ...
	I0914 00:36:00.820213  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37: {Name:mk8b5789c23e69638787fc7a9959d1efbdaf2020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.820297  874848 certs.go:381] copying /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37 -> /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt
	I0914 00:36:00.820377  874848 certs.go:385] copying /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37 -> /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key
	I0914 00:36:00.820432  874848 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key
	I0914 00:36:00.820453  874848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt with IP's: []
	I0914 00:36:01.002520  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt ...
	I0914 00:36:01.002560  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt: {Name:mkb7b3d55ccc68a6a5b5150959ff889ebad35b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:01.002757  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key ...
	I0914 00:36:01.002770  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key: {Name:mk1ef6af0211d101b3583380a03915d2b95c5f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:01.003925  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 00:36:01.003979  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:36:01.004010  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:36:01.004036  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem (1679 bytes)
	I0914 00:36:01.004717  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:36:01.031940  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 00:36:01.056903  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:36:01.083162  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:36:01.111392  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 00:36:01.147185  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:36:01.177160  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:36:01.205911  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:36:01.233312  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:36:01.259299  874848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:36:01.278922  874848 ssh_runner.go:195] Run: openssl version
	I0914 00:36:01.284640  874848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:36:01.296674  874848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:36:01.300328  874848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 00:35 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:36:01.300461  874848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:36:01.307711  874848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
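The two commands just above derive the OpenSSL subject-hash name for the minikube CA and link it into /etc/ssl/certs, which is where the b5213941.0 filename comes from. Composed as a standalone sketch, assuming the CA has already been linked to /etc/ssl/certs/minikubeCA.pem as in this run:

    # Recreate the hash-named symlink the log checks for above.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"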
	I0914 00:36:01.317582  874848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:36:01.320964  874848 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 00:36:01.321016  874848 kubeadm.go:392] StartCluster: {Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:36:01.321148  874848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:36:01.321218  874848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:36:01.359180  874848 cri.go:89] found id: ""
	I0914 00:36:01.359294  874848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 00:36:01.368589  874848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 00:36:01.378278  874848 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0914 00:36:01.378369  874848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:36:01.388604  874848 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:36:01.388628  874848 kubeadm.go:157] found existing configuration files:
	
	I0914 00:36:01.388687  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:36:01.397916  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:36:01.398044  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:36:01.407970  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:36:01.418575  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:36:01.418702  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:36:01.428387  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:36:01.437829  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:36:01.437915  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:36:01.446922  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:36:01.456143  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:36:01.456266  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:36:01.465388  874848 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 00:36:01.505666  874848 kubeadm.go:310] W0914 00:36:01.504983    1183 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:36:01.506942  874848 kubeadm.go:310] W0914 00:36:01.506340    1183 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:36:01.533882  874848 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0914 00:36:01.596564  874848 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:36:18.463208  874848 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 00:36:18.463272  874848 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 00:36:18.463364  874848 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0914 00:36:18.463422  874848 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0914 00:36:18.463465  874848 kubeadm.go:310] OS: Linux
	I0914 00:36:18.463513  874848 kubeadm.go:310] CGROUPS_CPU: enabled
	I0914 00:36:18.463569  874848 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0914 00:36:18.463623  874848 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0914 00:36:18.463685  874848 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0914 00:36:18.463738  874848 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0914 00:36:18.463797  874848 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0914 00:36:18.463846  874848 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0914 00:36:18.463898  874848 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0914 00:36:18.463954  874848 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0914 00:36:18.464031  874848 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 00:36:18.464129  874848 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 00:36:18.464222  874848 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 00:36:18.464287  874848 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 00:36:18.468842  874848 out.go:235]   - Generating certificates and keys ...
	I0914 00:36:18.468938  874848 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 00:36:18.469010  874848 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 00:36:18.469086  874848 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 00:36:18.469154  874848 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 00:36:18.469218  874848 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 00:36:18.469284  874848 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 00:36:18.469347  874848 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 00:36:18.469472  874848 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-885748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 00:36:18.469529  874848 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 00:36:18.469646  874848 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-885748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 00:36:18.469714  874848 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 00:36:18.469780  874848 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 00:36:18.469827  874848 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 00:36:18.469885  874848 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 00:36:18.469939  874848 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 00:36:18.469998  874848 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 00:36:18.470057  874848 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 00:36:18.470123  874848 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 00:36:18.470180  874848 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 00:36:18.470264  874848 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 00:36:18.470332  874848 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 00:36:18.472924  874848 out.go:235]   - Booting up control plane ...
	I0914 00:36:18.473034  874848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 00:36:18.473114  874848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 00:36:18.473210  874848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 00:36:18.473402  874848 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 00:36:18.473492  874848 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 00:36:18.473540  874848 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 00:36:18.473674  874848 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 00:36:18.473785  874848 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 00:36:18.473846  874848 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000717079s
	I0914 00:36:18.473919  874848 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 00:36:18.473979  874848 kubeadm.go:310] [api-check] The API server is healthy after 6.001506819s
	I0914 00:36:18.474086  874848 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 00:36:18.474212  874848 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 00:36:18.474272  874848 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 00:36:18.474458  874848 kubeadm.go:310] [mark-control-plane] Marking the node addons-885748 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 00:36:18.474517  874848 kubeadm.go:310] [bootstrap-token] Using token: d5jq5w.vhxle95wpku6sua3
	I0914 00:36:18.477217  874848 out.go:235]   - Configuring RBAC rules ...
	I0914 00:36:18.477426  874848 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 00:36:18.477516  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 00:36:18.477659  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 00:36:18.477798  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 00:36:18.477917  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 00:36:18.478005  874848 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 00:36:18.478122  874848 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 00:36:18.478169  874848 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 00:36:18.478217  874848 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 00:36:18.478225  874848 kubeadm.go:310] 
	I0914 00:36:18.478284  874848 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 00:36:18.478295  874848 kubeadm.go:310] 
	I0914 00:36:18.478372  874848 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 00:36:18.478380  874848 kubeadm.go:310] 
	I0914 00:36:18.478405  874848 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 00:36:18.478483  874848 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 00:36:18.478539  874848 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 00:36:18.478547  874848 kubeadm.go:310] 
	I0914 00:36:18.478601  874848 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 00:36:18.478608  874848 kubeadm.go:310] 
	I0914 00:36:18.478659  874848 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 00:36:18.478666  874848 kubeadm.go:310] 
	I0914 00:36:18.478718  874848 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 00:36:18.478796  874848 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 00:36:18.478865  874848 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 00:36:18.478872  874848 kubeadm.go:310] 
	I0914 00:36:18.478956  874848 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 00:36:18.479036  874848 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 00:36:18.479043  874848 kubeadm.go:310] 
	I0914 00:36:18.479127  874848 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d5jq5w.vhxle95wpku6sua3 \
	I0914 00:36:18.479234  874848 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57751d36d4a8735ba13dc9bb14d661ba8c23675462a620d84c252b50ebcb21ac \
	I0914 00:36:18.479257  874848 kubeadm.go:310] 	--control-plane 
	I0914 00:36:18.479264  874848 kubeadm.go:310] 
	I0914 00:36:18.479348  874848 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 00:36:18.479356  874848 kubeadm.go:310] 
	I0914 00:36:18.479437  874848 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d5jq5w.vhxle95wpku6sua3 \
	I0914 00:36:18.479556  874848 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57751d36d4a8735ba13dc9bb14d661ba8c23675462a620d84c252b50ebcb21ac 
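That closes the kubeadm init output. The bootstrap token in the join commands above has a 24h TTL (per the bootstrapTokens stanza in the config earlier), so it is not reusable indefinitely; on a live cluster a fresh worker join command can be printed with the standard kubeadm subcommand, noted here only as an aside since this test never joins extra nodes to the profile:

    # Print a new, valid join command if the original bootstrap token has expired.
    kubeadm token create --print-join-command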
	I0914 00:36:18.479573  874848 cni.go:84] Creating CNI manager for ""
	I0914 00:36:18.479580  874848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:36:18.482414  874848 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 00:36:18.485202  874848 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 00:36:18.488984  874848 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0914 00:36:18.489018  874848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0914 00:36:18.507633  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
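The apply above pushes minikube's kindnet manifest, the CNI recommended earlier for the docker driver + crio runtime combination. A cautious way to confirm the CNI pods come up, without assuming the exact resource names or labels inside that manifest:

    # Look for kindnet pods in any namespace; manifest names/labels are not assumed here.
    kubectl --context addons-885748 get pods -A -o wide | grep -i kindnet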
	I0914 00:36:18.797119  874848 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:36:18.797283  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:18.797372  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-885748 minikube.k8s.io/updated_at=2024_09_14T00_36_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=addons-885748 minikube.k8s.io/primary=true
	I0914 00:36:18.977875  874848 ops.go:34] apiserver oom_adj: -16
	I0914 00:36:18.977984  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:19.478709  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:19.978932  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:20.478468  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:20.978465  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:21.478838  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:21.978427  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:22.478979  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:22.570954  874848 kubeadm.go:1113] duration metric: took 3.773734503s to wait for elevateKubeSystemPrivileges
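The loop above (00:36:18.977 through 00:36:22.570) polls for the default service account after kube-system:default is bound to cluster-admin via the minikube-rbac clusterrolebinding at 00:36:18.797. A short verification sketch against the finished cluster, assuming the same context this run writes to the kubeconfig:

    # Confirm the service account exists and the binding grants kube-system:default cluster-admin.
    kubectl --context addons-885748 get serviceaccount default
    kubectl --context addons-885748 get clusterrolebinding minikube-rbac -o wide
    kubectl --context addons-885748 auth can-i '*' '*' --as=system:serviceaccount:kube-system:default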
	I0914 00:36:22.570993  874848 kubeadm.go:394] duration metric: took 21.249981733s to StartCluster
	I0914 00:36:22.571028  874848 settings.go:142] acquiring lock: {Name:mk58b1b9b697202ac4a931cd839962dd8a5a8fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:22.571754  874848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 00:36:22.572140  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/kubeconfig: {Name:mk4bce51b3b1a0b5e086688a43a01615410b8350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:22.572375  874848 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:36:22.572521  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 00:36:22.572784  874848 config.go:182] Loaded profile config "addons-885748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:36:22.572823  874848 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0914 00:36:22.572908  874848 addons.go:69] Setting yakd=true in profile "addons-885748"
	I0914 00:36:22.572928  874848 addons.go:234] Setting addon yakd=true in "addons-885748"
	I0914 00:36:22.572954  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.573611  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.573704  874848 addons.go:69] Setting inspektor-gadget=true in profile "addons-885748"
	I0914 00:36:22.573723  874848 addons.go:234] Setting addon inspektor-gadget=true in "addons-885748"
	I0914 00:36:22.573749  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.574184  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.574511  874848 addons.go:69] Setting cloud-spanner=true in profile "addons-885748"
	I0914 00:36:22.574548  874848 addons.go:234] Setting addon cloud-spanner=true in "addons-885748"
	I0914 00:36:22.574580  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.574991  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.576584  874848 addons.go:69] Setting metrics-server=true in profile "addons-885748"
	I0914 00:36:22.576658  874848 addons.go:234] Setting addon metrics-server=true in "addons-885748"
	I0914 00:36:22.576806  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.577674  874848 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-885748"
	I0914 00:36:22.577697  874848 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-885748"
	I0914 00:36:22.577728  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.578157  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.578583  874848 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-885748"
	I0914 00:36:22.578673  874848 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-885748"
	I0914 00:36:22.578735  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.579310  874848 addons.go:69] Setting default-storageclass=true in profile "addons-885748"
	I0914 00:36:22.579360  874848 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-885748"
	I0914 00:36:22.579654  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.584636  874848 addons.go:69] Setting registry=true in profile "addons-885748"
	I0914 00:36:22.584679  874848 addons.go:234] Setting addon registry=true in "addons-885748"
	I0914 00:36:22.584722  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.585212  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.593639  874848 addons.go:69] Setting gcp-auth=true in profile "addons-885748"
	I0914 00:36:22.593697  874848 mustload.go:65] Loading cluster: addons-885748
	I0914 00:36:22.593833  874848 addons.go:69] Setting ingress=true in profile "addons-885748"
	I0914 00:36:22.593875  874848 addons.go:234] Setting addon ingress=true in "addons-885748"
	I0914 00:36:22.593947  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.594556  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.598655  874848 addons.go:69] Setting storage-provisioner=true in profile "addons-885748"
	I0914 00:36:22.598696  874848 addons.go:234] Setting addon storage-provisioner=true in "addons-885748"
	I0914 00:36:22.598738  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.599322  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.608030  874848 addons.go:69] Setting ingress-dns=true in profile "addons-885748"
	I0914 00:36:22.608066  874848 addons.go:234] Setting addon ingress-dns=true in "addons-885748"
	I0914 00:36:22.608124  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.608719  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.626438  874848 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-885748"
	I0914 00:36:22.626484  874848 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-885748"
	I0914 00:36:22.627015  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.646668  874848 out.go:177] * Verifying Kubernetes components...
	I0914 00:36:22.646964  874848 addons.go:69] Setting volcano=true in profile "addons-885748"
	I0914 00:36:22.646995  874848 addons.go:234] Setting addon volcano=true in "addons-885748"
	I0914 00:36:22.647044  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.647621  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.705055  874848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:36:22.647935  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.727451  874848 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 00:36:22.730961  874848 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 00:36:22.731026  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 00:36:22.731127  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
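The Go template in the cli_runner line above extracts the host port that Docker mapped to the container's 22/tcp (SSH) port. A shorter, roughly equivalent check from a shell, shown here only as an illustration and not taken from this log, is:

    docker port addons-885748 22/tcp
    # prints the mapped HostPort (33564 in this run), the port the ssh clients later in this log connect to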
	I0914 00:36:22.648200  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.654108  874848 config.go:182] Loaded profile config "addons-885748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:36:22.761663  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.769083  874848 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 00:36:22.764527  874848 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-885748"
	I0914 00:36:22.769445  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.769907  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.656713  874848 addons.go:69] Setting volumesnapshots=true in profile "addons-885748"
	I0914 00:36:22.779473  874848 addons.go:234] Setting addon volumesnapshots=true in "addons-885748"
	I0914 00:36:22.779518  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.779996  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.792421  874848 addons.go:234] Setting addon default-storageclass=true in "addons-885748"
	I0914 00:36:22.792474  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.793016  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.798974  874848 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 00:36:22.798996  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 00:36:22.799056  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.807827  874848 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 00:36:22.808051  874848 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 00:36:22.813221  874848 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 00:36:22.818788  874848 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:36:22.821693  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:36:22.821715  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 00:36:22.836132  874848 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 00:36:22.836202  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.857667  874848 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:36:22.857695  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 00:36:22.857766  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.833063  874848 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 00:36:22.861715  874848 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 00:36:22.861794  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.901334  874848 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 00:36:22.901725  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:36:22.925512  874848 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 00:36:22.925579  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 00:36:22.925682  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.938211  874848 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	W0914 00:36:22.945665  874848 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0914 00:36:22.970824  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0914 00:36:22.981537  874848 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 00:36:22.981638  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0914 00:36:22.981747  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.948860  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 00:36:23.001198  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.002556  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:23.010390  874848 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 00:36:23.010488  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 00:36:23.010589  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.015908  874848 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 00:36:23.020213  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 00:36:23.020313  874848 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 00:36:23.020408  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.010624  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 00:36:23.028044  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 00:36:23.029030  874848 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 00:36:23.029111  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 00:36:23.030953  874848 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 00:36:23.031021  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.030844  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.040780  874848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:36:23.041521  874848 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 00:36:23.041540  874848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 00:36:23.041615  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.043116  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 00:36:23.045748  874848 out.go:177]   - Using image docker.io/busybox:stable
	I0914 00:36:23.057729  874848 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 00:36:23.057758  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 00:36:23.057830  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.064630  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 00:36:23.067439  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 00:36:23.070090  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 00:36:23.072657  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 00:36:23.077385  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 00:36:23.080086  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 00:36:23.082722  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 00:36:23.082752  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 00:36:23.082824  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.102635  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.164403  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.164493  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.181637  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.210074  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.221553  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.231587  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.243309  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.245585  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.246302  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.254741  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.413347  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 00:36:23.499318  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 00:36:23.563619  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 00:36:23.563646  874848 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 00:36:23.621774  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:36:23.632161  874848 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 00:36:23.632237  874848 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 00:36:23.658865  874848 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 00:36:23.658965  874848 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 00:36:23.678416  874848 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 00:36:23.678502  874848 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 00:36:23.687807  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 00:36:23.690302  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 00:36:23.690386  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 00:36:23.692343  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 00:36:23.741847  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 00:36:23.741869  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 00:36:23.753765  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 00:36:23.783717  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 00:36:23.783739  874848 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 00:36:23.798983  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 00:36:23.801823  874848 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 00:36:23.801892  874848 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 00:36:23.865228  874848 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 00:36:23.865305  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 00:36:23.896358  874848 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 00:36:23.896422  874848 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 00:36:23.908740  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 00:36:23.908812  874848 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 00:36:23.912605  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 00:36:23.912686  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 00:36:23.950755  874848 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 00:36:23.950831  874848 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 00:36:23.989143  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 00:36:23.989215  874848 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 00:36:24.045999  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 00:36:24.067005  874848 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 00:36:24.067084  874848 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 00:36:24.094537  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 00:36:24.094616  874848 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 00:36:24.121448  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 00:36:24.121549  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 00:36:24.152405  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 00:36:24.152477  874848 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 00:36:24.187995  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 00:36:24.188063  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 00:36:24.248301  874848 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 00:36:24.248379  874848 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 00:36:24.263656  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 00:36:24.263745  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 00:36:24.270468  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 00:36:24.279636  874848 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:36:24.279710  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 00:36:24.361088  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:36:24.363766  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 00:36:24.391930  874848 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 00:36:24.392005  874848 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 00:36:24.404762  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 00:36:24.404847  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 00:36:24.532083  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 00:36:24.532154  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 00:36:24.541629  874848 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 00:36:24.541706  874848 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 00:36:24.586534  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 00:36:24.586608  874848 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 00:36:24.613471  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 00:36:24.613547  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 00:36:24.635565  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 00:36:24.635636  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 00:36:24.636353  874848 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 00:36:24.636397  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 00:36:24.697087  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 00:36:24.718922  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 00:36:24.719002  874848 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 00:36:24.797070  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 00:36:26.012706  874848 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.019755946s)
	I0914 00:36:26.012793  874848 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
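For readability, the block that the sed pipeline above splices into the CoreDNS Corefile, reconstructed from the command itself rather than from captured output, is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

The same pipeline also inserts a log directive immediately before the errors plugin, then feeds the edited ConfigMap back through kubectl replace.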
	I0914 00:36:26.013955  874848 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.973152264s)
	I0914 00:36:26.015246  874848 node_ready.go:35] waiting up to 6m0s for node "addons-885748" to be "Ready" ...
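node_ready.go polls the node object until its Ready condition becomes True; the "Ready":"False" lines that follow are those polls. A hedged, roughly equivalent standalone check with plain kubectl would be:

    kubectl --context addons-885748 wait --for=condition=Ready node/addons-885748 --timeout=6m0s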
	I0914 00:36:26.818351  874848 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-885748" context rescaled to 1 replicas
	I0914 00:36:27.031638  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.618251032s)
	I0914 00:36:27.031747  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.532359077s)
	I0914 00:36:27.943561  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.321702676s)
	I0914 00:36:28.026391  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:29.167249  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.4793608s)
	I0914 00:36:29.167283  874848 addons.go:475] Verifying addon ingress=true in "addons-885748"
	I0914 00:36:29.167350  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.47493769s)
	I0914 00:36:29.167560  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.413773964s)
	I0914 00:36:29.167638  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.36858762s)
	I0914 00:36:29.167755  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.121680395s)
	I0914 00:36:29.167774  874848 addons.go:475] Verifying addon registry=true in "addons-885748"
	I0914 00:36:29.168314  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.897753986s)
	I0914 00:36:29.168337  874848 addons.go:475] Verifying addon metrics-server=true in "addons-885748"
	I0914 00:36:29.170573  874848 out.go:177] * Verifying ingress addon...
	I0914 00:36:29.170589  874848 out.go:177] * Verifying registry addon...
	I0914 00:36:29.173514  874848 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 00:36:29.174597  874848 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
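kapi.go keeps re-listing the pods behind each label selector until they leave Pending. A comparable hedged check for the registry selector above, using kubectl directly, would be the following (kubectl waits on the Ready condition, which is slightly stricter than the Running phase the log tracks):

    kubectl --context addons-885748 -n kube-system wait --for=condition=Ready \
      pod -l kubernetes.io/minikube-addons=registry --timeout=6m0s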
	I0914 00:36:29.186358  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.825179108s)
	W0914 00:36:29.186394  874848 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 00:36:29.186416  874848 retry.go:31] will retry after 308.598821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 00:36:29.186470  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.822639917s)
	I0914 00:36:29.186790  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.489573307s)
	I0914 00:36:29.190776  874848 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-885748 service yakd-dashboard -n yakd-dashboard
	
	I0914 00:36:29.212739  874848 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 00:36:29.212822  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0914 00:36:29.216216  874848 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
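The "object has been modified" warning above is an ordinary optimistic-concurrency conflict: the addon read the local-path StorageClass, something else updated it in the meantime, and the follow-up update carried a stale resourceVersion. A hedged alternative that avoids the race (not what minikube does here) is to set the default-class annotation with a patch, which does not send a resourceVersion at all:

    kubectl --context addons-885748 patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'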
	I0914 00:36:29.217682  874848 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 00:36:29.217744  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:29.495756  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
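The earlier "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first" failure happens because the CRDs and a VolumeSnapshotClass object are applied in a single invocation, so the custom resource is mapped before the new API types are registered. The retry above simply re-applies once the CRDs exist; a hedged two-step that avoids the retry altogether would be to apply the CRD first, wait for it to be established, and only then apply the class:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml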
	I0914 00:36:29.512701  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.715530225s)
	I0914 00:36:29.512744  874848 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-885748"
	I0914 00:36:29.515691  874848 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 00:36:29.519486  874848 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 00:36:29.539384  874848 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 00:36:29.539410  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:29.680191  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:29.681532  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:30.038990  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:30.044266  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:30.207217  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:30.207796  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:30.532034  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:30.684159  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:30.690166  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:30.825068  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.329257891s)
	I0914 00:36:31.024298  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:31.191902  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:31.193352  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:31.523717  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:31.679556  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:31.679786  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:32.025958  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:32.177305  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:32.178350  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:32.520588  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:32.524000  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:32.680213  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:32.680810  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:33.030499  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:33.183678  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:33.184449  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:33.377273  874848 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 00:36:33.377351  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:33.401444  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:33.523097  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:33.527304  874848 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 00:36:33.559831  874848 addons.go:234] Setting addon gcp-auth=true in "addons-885748"
	I0914 00:36:33.559879  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:33.560345  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:33.581054  874848 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 00:36:33.581121  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:33.600889  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:33.680155  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:33.681074  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:33.693949  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:36:33.696552  874848 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 00:36:33.699092  874848 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 00:36:33.699119  874848 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 00:36:33.725491  874848 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 00:36:33.725513  874848 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 00:36:33.757659  874848 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 00:36:33.757696  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 00:36:33.780450  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 00:36:34.023974  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:34.184181  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:34.186273  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:34.384150  874848 addons.go:475] Verifying addon gcp-auth=true in "addons-885748"
	I0914 00:36:34.387248  874848 out.go:177] * Verifying gcp-auth addon...
	I0914 00:36:34.390886  874848 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 00:36:34.407033  874848 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 00:36:34.407059  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:34.523397  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:34.677151  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:34.678963  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:34.894681  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:35.022306  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:35.024438  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:35.180032  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:35.183480  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:35.394703  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:35.523865  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:35.678515  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:35.678806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:35.894818  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:36.023008  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:36.177908  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:36.178956  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:36.394280  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:36.523510  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:36.678580  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:36.679226  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:36.894421  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:37.022844  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:37.024157  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:37.179135  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:37.179370  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:37.394553  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:37.524074  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:37.678080  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:37.679783  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:37.894034  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:38.024946  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:38.177683  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:38.179187  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:38.394540  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:38.522543  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:38.677451  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:38.679196  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:38.894643  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:39.024177  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:39.176814  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:39.178126  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:39.394403  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:39.518971  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:39.523042  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:39.677285  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:39.678415  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:39.894993  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:40.023302  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:40.177076  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:40.179028  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:40.394726  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:40.523300  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:40.678856  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:40.679285  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:40.894199  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:41.023013  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:41.177101  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:41.179290  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:41.394521  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:41.523390  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:41.677524  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:41.678904  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:41.894216  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:42.019193  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:42.023697  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:42.179722  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:42.181177  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:42.394876  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:42.523050  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:42.678341  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:42.679685  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:42.894916  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:43.023233  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:43.178682  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:43.179104  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:43.393946  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:43.523203  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:43.678371  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:43.679407  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:43.894754  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:44.026113  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:44.026149  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:44.177508  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:44.178927  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:44.394341  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:44.523594  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:44.676754  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:44.678698  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:44.893862  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:45.036741  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:45.178582  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:45.179375  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:45.393987  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:45.523508  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:45.677652  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:45.679385  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:45.894863  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:46.022919  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:46.177463  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:46.179089  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:46.394354  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:46.519328  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:46.523445  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:46.681121  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:46.681456  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:46.894381  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:47.028265  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:47.176495  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:47.178087  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:47.394423  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:47.522613  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:47.678292  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:47.679289  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:47.894860  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:48.024213  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:48.179671  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:48.179824  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:48.394247  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:48.519429  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:48.523282  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:48.678115  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:48.678922  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:48.894375  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:49.023908  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:49.177643  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:49.178596  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:49.394010  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:49.522523  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:49.676673  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:49.678634  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:49.893972  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:50.022979  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:50.177122  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:50.178955  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:50.394118  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:50.522454  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:50.677309  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:50.679281  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:50.894463  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:51.018819  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:51.023156  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:51.176878  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:51.178776  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:51.394280  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:51.523133  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:51.677466  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:51.679143  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:51.894631  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:52.023396  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:52.178391  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:52.179266  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:52.397286  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:52.522690  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:52.678357  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:52.679217  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:52.894486  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:53.019119  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:53.023307  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:53.179178  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:53.180677  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:53.394368  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:53.523183  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:53.676964  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:53.678364  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:53.894958  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:54.023821  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:54.177805  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:54.179585  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:54.394992  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:54.522425  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:54.676902  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:54.679155  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:54.894649  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:55.019727  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:55.023310  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:55.178226  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:55.178305  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:55.394779  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:55.522925  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:55.678915  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:55.679410  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:55.894279  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:56.023547  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:56.177234  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:56.178937  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:56.394264  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:56.523247  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:56.677609  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:56.678834  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:56.894510  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:57.023730  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:57.176949  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:57.180346  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:57.394998  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:57.519266  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:57.522884  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:57.677959  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:57.678792  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:57.894825  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:58.023518  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:58.178392  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:58.179460  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:58.394911  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:58.522691  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:58.677369  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:58.678790  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:58.895085  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:59.022531  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:59.177915  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:59.178932  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:59.394803  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:59.522983  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:59.677060  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:59.678616  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:59.894389  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:00.020453  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:00.066148  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:00.187116  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:00.189373  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:00.395382  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:00.523090  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:00.677052  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:00.679313  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:00.894619  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:01.022440  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:01.177549  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:01.179048  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:01.394471  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:01.522894  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:01.678225  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:01.679589  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:01.894931  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:02.023563  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:02.176988  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:02.179706  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:02.394268  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:02.519079  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:02.522948  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:02.676989  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:02.679144  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:02.896226  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:03.022882  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:03.177959  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:03.179565  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:03.394297  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:03.523549  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:03.677616  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:03.679072  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:03.894507  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:04.023469  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:04.177349  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:04.179232  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:04.393784  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:04.522550  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:04.676854  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:04.678639  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:04.895316  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:05.018889  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:05.023416  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:05.177247  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:05.179114  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:05.394990  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:05.522362  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:05.678794  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:05.678970  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:05.893966  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:06.023096  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:06.177160  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:06.177654  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:06.394767  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:06.522308  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:06.678541  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:06.679024  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:06.894066  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:07.019192  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:07.023773  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:07.177818  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:07.178447  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:07.396991  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:07.536462  874848 node_ready.go:49] node "addons-885748" has status "Ready":"True"
	I0914 00:37:07.536540  874848 node_ready.go:38] duration metric: took 41.52122498s for node "addons-885748" to be "Ready" ...
	I0914 00:37:07.536564  874848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:37:07.545962  874848 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 00:37:07.545989  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:07.560429  874848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8m89r" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:07.735954  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:07.737075  874848 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 00:37:07.737140  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:07.904390  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:08.025045  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:08.203301  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:08.204003  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:08.398762  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:08.524366  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:08.680865  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:08.681279  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:08.900177  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:09.025214  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:09.185641  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:09.187308  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:09.397596  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:09.524714  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:09.567577  874848 pod_ready.go:93] pod "coredns-7c65d6cfc9-8m89r" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.567609  874848 pod_ready.go:82] duration metric: took 2.007088321s for pod "coredns-7c65d6cfc9-8m89r" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.567631  874848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.579438  874848 pod_ready.go:93] pod "etcd-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.579467  874848 pod_ready.go:82] duration metric: took 11.821727ms for pod "etcd-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.579484  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.585364  874848 pod_ready.go:93] pod "kube-apiserver-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.585385  874848 pod_ready.go:82] duration metric: took 5.89278ms for pod "kube-apiserver-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.585397  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.592284  874848 pod_ready.go:93] pod "kube-controller-manager-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.592307  874848 pod_ready.go:82] duration metric: took 6.902865ms for pod "kube-controller-manager-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.592321  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dqs2h" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.602562  874848 pod_ready.go:93] pod "kube-proxy-dqs2h" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.602588  874848 pod_ready.go:82] duration metric: took 10.259695ms for pod "kube-proxy-dqs2h" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.602600  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.681569  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:09.682934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:09.897633  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:09.965408  874848 pod_ready.go:93] pod "kube-scheduler-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.965433  874848 pod_ready.go:82] duration metric: took 362.810925ms for pod "kube-scheduler-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.965445  874848 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:10.026493  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:10.179971  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:10.182101  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:10.395509  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:10.526262  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:10.677859  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:10.679418  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:10.895078  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:11.025168  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:11.178262  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:11.178738  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:11.395621  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:11.524715  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:11.679184  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:11.679849  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:11.895381  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:11.971662  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:12.025550  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:12.181156  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:12.182835  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:12.394926  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:12.531168  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:12.679085  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:12.680606  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:12.895370  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:13.024873  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:13.177451  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:13.180418  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:13.395380  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:13.525613  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:13.680824  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:13.681895  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:13.895500  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:14.025845  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:14.179688  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:14.180935  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:14.394764  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:14.471939  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:14.524071  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:14.677438  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:14.679890  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:14.894272  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:15.025996  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:15.178028  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:15.180621  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:15.395485  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:15.523999  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:15.678417  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:15.678947  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:15.894558  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:16.025025  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:16.178905  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:16.180441  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:16.395296  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:16.473887  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:16.525561  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:16.678894  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:16.680480  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:16.896742  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:17.026245  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:17.182060  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:17.184538  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:17.395416  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:17.526035  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:17.681664  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:17.683402  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:17.895817  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:18.025795  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:18.181775  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:18.181945  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:18.395803  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:18.524488  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:18.677526  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:18.681893  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:18.894318  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:18.972325  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:19.024913  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:19.178927  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:19.181186  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:19.394794  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:19.524419  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:19.679432  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:19.680935  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:19.894744  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:20.024634  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:20.178560  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:20.179605  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:20.394255  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:20.524521  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:20.676974  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:20.684973  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:20.894834  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:21.025765  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:21.180351  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:21.181362  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:21.399049  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:21.475732  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:21.528175  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:21.680488  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:21.681930  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:21.894250  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:22.024817  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:22.177928  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:22.179422  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:22.394499  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:22.525146  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:22.678627  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:22.680308  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:22.895246  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:23.025863  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:23.177031  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:23.180339  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:23.396492  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:23.524470  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:23.678638  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:23.679382  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:23.895207  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:23.977712  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:24.029304  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:24.180362  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:24.183282  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:24.396641  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:24.529357  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:24.682468  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:24.684392  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:24.895280  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:25.025730  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:25.181572  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:25.183727  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:25.395405  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:25.524566  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:25.682333  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:25.683779  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:25.901528  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:25.980022  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:26.035812  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:26.184465  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:26.185902  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:26.399183  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:26.525590  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:26.684422  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:26.685595  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:26.895348  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:27.024667  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:27.178206  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:27.179539  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:27.395704  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:27.525158  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:27.679873  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:27.680479  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:27.897323  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:28.024852  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:28.179057  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:28.179565  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:28.394541  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:28.471727  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:28.524321  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:28.679544  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:28.680139  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:28.894850  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:29.024419  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:29.179889  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:29.180105  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:29.394579  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:29.525631  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:29.677384  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:29.679484  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:29.895140  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:30.039214  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:30.181482  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:30.191420  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:30.394455  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:30.472915  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:30.527806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:30.682674  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:30.686718  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:30.894383  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:31.025227  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:31.179957  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:31.181007  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:31.394945  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:31.524555  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:31.677768  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:31.680475  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:31.894695  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:32.025054  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:32.177978  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:32.179683  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:32.395007  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:32.524604  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:32.678592  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:32.679923  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:32.894812  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:32.973284  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:33.026118  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:33.180514  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:33.181997  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:33.394813  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:33.525731  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:33.680977  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:33.682906  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:33.895068  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:34.025233  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:34.179148  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:34.182917  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:34.395651  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:34.526147  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:34.684709  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:34.686035  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:34.895105  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:35.026583  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:35.183369  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:35.185238  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:35.394700  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:35.472721  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:35.525329  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:35.680355  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:35.681672  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:35.894138  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:36.025921  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:36.180639  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:36.181945  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:36.395181  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:36.525218  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:36.683856  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:36.688534  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:36.895037  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:37.026844  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:37.180732  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:37.181806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:37.395037  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:37.473242  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:37.525608  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:37.685407  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:37.688853  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:37.896431  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:38.026407  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:38.178835  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:38.180018  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:38.395082  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:38.524569  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:38.679237  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:38.680243  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:38.895023  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:39.024964  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:39.178384  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:39.180349  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:39.394794  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:39.524736  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:39.679043  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:39.680200  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:39.894788  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:39.972306  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:40.026022  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:40.178841  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:40.180531  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:40.396615  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:40.526316  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:40.679017  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:40.681396  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:40.895111  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:41.025310  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:41.180533  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:41.182040  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:41.394933  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:41.525086  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:41.678595  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:41.681805  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:41.894264  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:41.972495  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:42.027746  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:42.181385  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:42.183231  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:42.395231  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:42.525359  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:42.686622  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:42.687690  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:42.894396  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:43.026528  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:43.178411  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:43.180614  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:43.394781  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:43.526157  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:43.678825  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:43.680171  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:43.894755  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:43.974886  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:44.025244  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:44.180632  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:44.180874  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:44.394573  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:44.525492  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:44.679244  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:44.680033  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:44.894811  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:45.026849  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:45.180224  874848 kapi.go:107] duration metric: took 1m16.006705195s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 00:37:45.181335  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:45.394742  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:45.524737  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:45.678853  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:45.894538  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:46.025270  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:46.180187  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:46.420542  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:46.473873  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:46.527047  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:46.690644  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:46.895272  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:47.025191  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:47.180081  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:47.394774  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:47.524580  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:47.679051  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:47.895292  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:48.028400  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:48.181125  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:48.395824  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:48.526249  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:48.680363  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:48.894934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:48.973378  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:49.024934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:49.180049  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:49.394655  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:49.525575  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:49.678950  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:49.895006  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:50.027602  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:50.180508  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:50.395361  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:50.525087  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:50.679749  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:50.894761  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:50.974000  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:51.026760  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:51.180269  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:51.395059  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:51.525416  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:51.678919  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:51.895522  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:52.025040  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:52.179046  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:52.394934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:52.524525  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:52.679988  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:52.894201  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:53.024676  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:53.179453  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:53.394512  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:53.471888  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:53.523864  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:53.679106  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:53.896220  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:54.024917  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:54.180335  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:54.396636  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:54.526135  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:54.679867  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:54.912541  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:55.034071  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:55.179674  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:55.395396  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:55.473836  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:55.528494  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:55.680286  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:55.895006  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:56.027576  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:56.179160  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:56.398064  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:56.525392  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:56.680283  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:56.895453  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:57.024877  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:57.179495  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:57.395302  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:57.526310  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:57.678953  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:57.894929  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:57.978490  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:58.024590  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:58.182030  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:58.396784  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:58.526220  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:58.679161  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:58.894516  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:59.045878  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:59.183420  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:59.395337  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:59.525591  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:59.679994  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:59.896190  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:00.044921  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:00.276537  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:00.395763  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:00.472688  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:00.524316  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:00.679693  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:00.894767  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:01.024436  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:01.182184  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:01.396167  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:01.525666  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:01.679817  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:01.895495  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:02.026019  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:02.180745  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:02.395882  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:02.525241  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:02.679057  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:02.894767  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:02.975993  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:03.025801  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:03.180760  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:03.395339  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:03.526291  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:03.679567  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:03.899232  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:04.024210  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:04.179325  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:04.395706  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:04.524231  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:04.679479  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:04.894905  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:05.027840  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:05.182382  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:05.396023  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:05.473085  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:05.525806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:05.679191  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:05.896374  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:06.029480  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:06.178890  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:06.395107  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:06.525046  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:06.679017  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:06.894377  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:07.024327  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:07.178898  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:07.398532  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:07.473351  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:07.525318  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:07.681140  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:07.894913  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:08.027202  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:08.182979  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:08.395165  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:08.524187  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:08.694704  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:08.895184  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:09.030109  874848 kapi.go:107] duration metric: took 1m39.510623393s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 00:38:09.179285  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:09.395413  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:09.679463  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:09.895033  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:09.973120  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:10.179271  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:10.394612  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:10.679453  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:10.895174  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:11.178632  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:11.395338  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:11.679157  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:11.894373  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:12.179833  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:12.394349  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:12.471290  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:12.679175  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:12.895106  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:13.178590  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:13.396117  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:13.680434  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:13.894967  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:14.180563  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:14.396225  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:14.471905  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:14.679676  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:14.895516  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:15.179205  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:15.396426  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:15.679433  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:15.894213  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:16.179496  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:16.395328  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:16.476000  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:16.680237  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:16.896031  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:17.180049  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:17.395144  874848 kapi.go:107] duration metric: took 1m43.00425795s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 00:38:17.398321  874848 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-885748 cluster.
	I0914 00:38:17.400983  874848 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 00:38:17.403694  874848 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 00:38:17.679164  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:18.180368  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:18.483241  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:18.680385  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:19.184791  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:19.679772  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:20.180317  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:20.680340  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:20.972195  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:21.178702  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:21.679238  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:22.180491  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:22.681102  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:23.189543  874848 kapi.go:107] duration metric: took 1m54.01494077s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 00:38:23.191218  874848 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0914 00:38:23.192745  874848 addons.go:510] duration metric: took 2m0.619913914s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0914 00:38:23.475147  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:25.971975  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:28.477226  874848 pod_ready.go:93] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"True"
	I0914 00:38:28.477369  874848 pod_ready.go:82] duration metric: took 1m18.511914681s for pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace to be "Ready" ...
	I0914 00:38:28.477405  874848 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9nphx" in "kube-system" namespace to be "Ready" ...
	I0914 00:38:28.484642  874848 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9nphx" in "kube-system" namespace has status "Ready":"True"
	I0914 00:38:28.484732  874848 pod_ready.go:82] duration metric: took 7.280703ms for pod "nvidia-device-plugin-daemonset-9nphx" in "kube-system" namespace to be "Ready" ...
	I0914 00:38:28.484774  874848 pod_ready.go:39] duration metric: took 1m20.948183548s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:38:28.484841  874848 api_server.go:52] waiting for apiserver process to appear ...
	I0914 00:38:28.484919  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:38:28.485034  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:38:28.547414  874848 cri.go:89] found id: "48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:28.547485  874848 cri.go:89] found id: ""
	I0914 00:38:28.547506  874848 logs.go:276] 1 containers: [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4]
	I0914 00:38:28.547595  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.551987  874848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:38:28.552116  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:38:28.598910  874848 cri.go:89] found id: "f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:28.598933  874848 cri.go:89] found id: ""
	I0914 00:38:28.598941  874848 logs.go:276] 1 containers: [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7]
	I0914 00:38:28.599013  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.602400  874848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:38:28.602560  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:38:28.644171  874848 cri.go:89] found id: "80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:28.644192  874848 cri.go:89] found id: ""
	I0914 00:38:28.644201  874848 logs.go:276] 1 containers: [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9]
	I0914 00:38:28.644254  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.647972  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:38:28.648065  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:38:28.684644  874848 cri.go:89] found id: "d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:28.684667  874848 cri.go:89] found id: ""
	I0914 00:38:28.684675  874848 logs.go:276] 1 containers: [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289]
	I0914 00:38:28.684761  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.689599  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:38:28.689693  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:38:28.727470  874848 cri.go:89] found id: "a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:28.727491  874848 cri.go:89] found id: ""
	I0914 00:38:28.727499  874848 logs.go:276] 1 containers: [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887]
	I0914 00:38:28.727552  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.731365  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:38:28.731447  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:38:28.771519  874848 cri.go:89] found id: "f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:28.771541  874848 cri.go:89] found id: ""
	I0914 00:38:28.771550  874848 logs.go:276] 1 containers: [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da]
	I0914 00:38:28.771625  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.775121  874848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:38:28.775189  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:38:28.814792  874848 cri.go:89] found id: "56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:28.814816  874848 cri.go:89] found id: ""
	I0914 00:38:28.814824  874848 logs.go:276] 1 containers: [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d]
	I0914 00:38:28.814877  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.818284  874848 logs.go:123] Gathering logs for kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] ...
	I0914 00:38:28.818307  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:28.891320  874848 logs.go:123] Gathering logs for etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] ...
	I0914 00:38:28.891360  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:28.937126  874848 logs.go:123] Gathering logs for coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] ...
	I0914 00:38:28.937157  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:28.983373  874848 logs.go:123] Gathering logs for kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] ...
	I0914 00:38:28.983404  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:29.030599  874848 logs.go:123] Gathering logs for container status ...
	I0914 00:38:29.030626  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:38:29.088803  874848 logs.go:123] Gathering logs for kubelet ...
	I0914 00:38:29.088834  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 00:38:29.133183  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.432207    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.133455  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.432263    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.133676  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.439952    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.133911  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440000    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.134100  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.440181    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.134329  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440215    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.134541  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461184    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.134794  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461241    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.134992  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461318    1502 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.135240  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.135446  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.135709  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.135907  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.136142  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:29.192135  874848 logs.go:123] Gathering logs for dmesg ...
	I0914 00:38:29.192184  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:38:29.210094  874848 logs.go:123] Gathering logs for kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] ...
	I0914 00:38:29.210125  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:29.301224  874848 logs.go:123] Gathering logs for kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] ...
	I0914 00:38:29.301271  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:29.347119  874848 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:38:29.347147  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:38:29.444517  874848 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:38:29.444551  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:38:29.632311  874848 logs.go:123] Gathering logs for kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] ...
	I0914 00:38:29.632339  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:29.679537  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:29.679564  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 00:38:29.679625  874848 out.go:270] X Problems detected in kubelet:
	W0914 00:38:29.679636  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.679644  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.679651  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.679704  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.679712  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:29.679719  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:29.679725  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:38:39.681411  874848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:38:39.695346  874848 api_server.go:72] duration metric: took 2m17.122934524s to wait for apiserver process to appear ...
	I0914 00:38:39.695371  874848 api_server.go:88] waiting for apiserver healthz status ...
	I0914 00:38:39.695407  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:38:39.695463  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:38:39.743999  874848 cri.go:89] found id: "48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:39.744019  874848 cri.go:89] found id: ""
	I0914 00:38:39.744026  874848 logs.go:276] 1 containers: [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4]
	I0914 00:38:39.744108  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.748186  874848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:38:39.748271  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:38:39.786567  874848 cri.go:89] found id: "f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:39.786591  874848 cri.go:89] found id: ""
	I0914 00:38:39.786600  874848 logs.go:276] 1 containers: [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7]
	I0914 00:38:39.786673  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.790106  874848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:38:39.790172  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:38:39.830802  874848 cri.go:89] found id: "80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:39.830825  874848 cri.go:89] found id: ""
	I0914 00:38:39.830832  874848 logs.go:276] 1 containers: [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9]
	I0914 00:38:39.830891  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.834483  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:38:39.834578  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:38:39.873400  874848 cri.go:89] found id: "d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:39.873426  874848 cri.go:89] found id: ""
	I0914 00:38:39.873435  874848 logs.go:276] 1 containers: [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289]
	I0914 00:38:39.873493  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.877489  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:38:39.877568  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:38:39.915990  874848 cri.go:89] found id: "a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:39.916016  874848 cri.go:89] found id: ""
	I0914 00:38:39.916025  874848 logs.go:276] 1 containers: [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887]
	I0914 00:38:39.916112  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.919561  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:38:39.919637  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:38:39.957315  874848 cri.go:89] found id: "f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:39.957383  874848 cri.go:89] found id: ""
	I0914 00:38:39.957405  874848 logs.go:276] 1 containers: [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da]
	I0914 00:38:39.957474  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.960827  874848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:38:39.960894  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:38:40.000698  874848 cri.go:89] found id: "56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:40.000764  874848 cri.go:89] found id: ""
	I0914 00:38:40.000787  874848 logs.go:276] 1 containers: [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d]
	I0914 00:38:40.000868  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:40.009160  874848 logs.go:123] Gathering logs for kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] ...
	I0914 00:38:40.009238  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:40.063889  874848 logs.go:123] Gathering logs for kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] ...
	I0914 00:38:40.063916  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:40.140420  874848 logs.go:123] Gathering logs for container status ...
	I0914 00:38:40.140455  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:38:40.191420  874848 logs.go:123] Gathering logs for kubelet ...
	I0914 00:38:40.191454  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 00:38:40.233432  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.432207    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.233678  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.432263    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.233863  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.439952    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.234086  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440000    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.234255  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.440181    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.234464  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440215    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.234649  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461184    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.234875  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461241    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.235058  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461318    1502 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.235282  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.235469  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.235697  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.235870  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.236085  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:40.287929  874848 logs.go:123] Gathering logs for dmesg ...
	I0914 00:38:40.287960  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:38:40.304167  874848 logs.go:123] Gathering logs for coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] ...
	I0914 00:38:40.304197  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:40.351418  874848 logs.go:123] Gathering logs for kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] ...
	I0914 00:38:40.351450  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:40.405932  874848 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:38:40.405964  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:38:40.500837  874848 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:38:40.500877  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:38:40.647711  874848 logs.go:123] Gathering logs for kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] ...
	I0914 00:38:40.647741  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:40.699610  874848 logs.go:123] Gathering logs for etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] ...
	I0914 00:38:40.699643  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:40.758127  874848 logs.go:123] Gathering logs for kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] ...
	I0914 00:38:40.758155  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:40.808598  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:40.808623  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 00:38:40.808730  874848 out.go:270] X Problems detected in kubelet:
	W0914 00:38:40.808745  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.808772  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.808781  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.808787  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.808793  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:40.808806  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:40.808813  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:38:50.810748  874848 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 00:38:50.820324  874848 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 00:38:50.821343  874848 api_server.go:141] control plane version: v1.31.1
	I0914 00:38:50.821369  874848 api_server.go:131] duration metric: took 11.125990917s to wait for apiserver health ...
	I0914 00:38:50.821379  874848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:38:50.821403  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:38:50.821465  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:38:50.857789  874848 cri.go:89] found id: "48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:50.857812  874848 cri.go:89] found id: ""
	I0914 00:38:50.857820  874848 logs.go:276] 1 containers: [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4]
	I0914 00:38:50.857879  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:50.862216  874848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:38:50.862284  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:38:50.900268  874848 cri.go:89] found id: "f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:50.900291  874848 cri.go:89] found id: ""
	I0914 00:38:50.900299  874848 logs.go:276] 1 containers: [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7]
	I0914 00:38:50.900373  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:50.903842  874848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:38:50.903933  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:38:50.942518  874848 cri.go:89] found id: "80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:50.942541  874848 cri.go:89] found id: ""
	I0914 00:38:50.942549  874848 logs.go:276] 1 containers: [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9]
	I0914 00:38:50.942619  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:50.946096  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:38:50.946185  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:38:51.008164  874848 cri.go:89] found id: "d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:51.008212  874848 cri.go:89] found id: ""
	I0914 00:38:51.008227  874848 logs.go:276] 1 containers: [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289]
	I0914 00:38:51.008295  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.013303  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:38:51.013405  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:38:51.060066  874848 cri.go:89] found id: "a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:51.060149  874848 cri.go:89] found id: ""
	I0914 00:38:51.060172  874848 logs.go:276] 1 containers: [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887]
	I0914 00:38:51.060263  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.064118  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:38:51.064238  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:38:51.110490  874848 cri.go:89] found id: "f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:51.110528  874848 cri.go:89] found id: ""
	I0914 00:38:51.110537  874848 logs.go:276] 1 containers: [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da]
	I0914 00:38:51.110602  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.114745  874848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:38:51.114821  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:38:51.160743  874848 cri.go:89] found id: "56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:51.160763  874848 cri.go:89] found id: ""
	I0914 00:38:51.160771  874848 logs.go:276] 1 containers: [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d]
	I0914 00:38:51.160828  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.164783  874848 logs.go:123] Gathering logs for kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] ...
	I0914 00:38:51.164809  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:51.215849  874848 logs.go:123] Gathering logs for kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] ...
	I0914 00:38:51.215885  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:51.312761  874848 logs.go:123] Gathering logs for kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] ...
	I0914 00:38:51.312793  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:51.353667  874848 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:38:51.353697  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:38:51.448552  874848 logs.go:123] Gathering logs for container status ...
	I0914 00:38:51.448591  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:38:51.500391  874848 logs.go:123] Gathering logs for kubelet ...
	I0914 00:38:51.500420  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 00:38:51.527174  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.432207    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.527480  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.432263    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.527688  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.439952    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.527942  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440000    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.528142  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.440181    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.528385  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440215    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.528603  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461184    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.528866  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461241    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.529094  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461318    1502 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.529366  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.529580  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.529810  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.529984  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.530197  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:51.594195  874848 logs.go:123] Gathering logs for coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] ...
	I0914 00:38:51.594227  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:51.635725  874848 logs.go:123] Gathering logs for kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] ...
	I0914 00:38:51.635758  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:51.704376  874848 logs.go:123] Gathering logs for etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] ...
	I0914 00:38:51.704410  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:51.757616  874848 logs.go:123] Gathering logs for kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] ...
	I0914 00:38:51.757649  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:51.796955  874848 logs.go:123] Gathering logs for dmesg ...
	I0914 00:38:51.796986  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:38:51.815711  874848 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:38:51.815779  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:38:51.950032  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:51.950064  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 00:38:51.950122  874848 out.go:270] X Problems detected in kubelet:
	W0914 00:38:51.950135  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.950143  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.950157  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.950164  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.950177  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:51.950183  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:51.950190  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:39:01.963900  874848 system_pods.go:59] 18 kube-system pods found
	I0914 00:39:01.963965  874848 system_pods.go:61] "coredns-7c65d6cfc9-8m89r" [550228bd-69a1-4530-af98-0200cecdabf1] Running
	I0914 00:39:01.963975  874848 system_pods.go:61] "csi-hostpath-attacher-0" [cbc09b3c-e59c-4698-b6c7-f9d1746ab697] Running
	I0914 00:39:01.964017  874848 system_pods.go:61] "csi-hostpath-resizer-0" [1d0b01fe-048b-4b9e-82dd-5b408414180f] Running
	I0914 00:39:01.964026  874848 system_pods.go:61] "csi-hostpathplugin-mgx77" [456dedd2-11aa-43aa-8f21-e93340384161] Running
	I0914 00:39:01.964031  874848 system_pods.go:61] "etcd-addons-885748" [76fc0bec-b6e2-415d-8c2a-3bdb3f6bf113] Running
	I0914 00:39:01.964035  874848 system_pods.go:61] "kindnet-m55kx" [724646d8-f3df-4b7c-830a-ec84d16dc1c6] Running
	I0914 00:39:01.964040  874848 system_pods.go:61] "kube-apiserver-addons-885748" [c6447df2-c534-4e85-afc8-5da7d2435aa6] Running
	I0914 00:39:01.964045  874848 system_pods.go:61] "kube-controller-manager-addons-885748" [9727b4e8-1fa1-4175-b2ce-7bdd6ac0676c] Running
	I0914 00:39:01.964050  874848 system_pods.go:61] "kube-ingress-dns-minikube" [e6eb7e3a-203d-452a-b040-fbe431e6f08f] Running
	I0914 00:39:01.964054  874848 system_pods.go:61] "kube-proxy-dqs2h" [ad11d9fd-caaa-4026-86f8-aba3e5ac2834] Running
	I0914 00:39:01.964090  874848 system_pods.go:61] "kube-scheduler-addons-885748" [ae7fd70d-d206-474f-a967-53dc9227db19] Running
	I0914 00:39:01.964102  874848 system_pods.go:61] "metrics-server-84c5f94fbc-96xbg" [9c339307-23c2-46f3-af0b-9a4d12c82b32] Running
	I0914 00:39:01.964107  874848 system_pods.go:61] "nvidia-device-plugin-daemonset-9nphx" [8f3b2546-ef55-49b2-8f31-dd8f4ecdcf93] Running
	I0914 00:39:01.964113  874848 system_pods.go:61] "registry-66c9cd494c-bkhkl" [4d931f29-d87c-4bc8-8e58-88b441e56b0a] Running
	I0914 00:39:01.964118  874848 system_pods.go:61] "registry-proxy-fb2vb" [7d63ca1e-f5bf-47eb-84af-ebd01e9cd4b6] Running
	I0914 00:39:01.964127  874848 system_pods.go:61] "snapshot-controller-56fcc65765-8pfcj" [37872304-9181-40b4-8ebf-9958cdc3a7b0] Running
	I0914 00:39:01.964132  874848 system_pods.go:61] "snapshot-controller-56fcc65765-nwsdn" [bb956da0-8552-4d95-a92d-8a7311005caf] Running
	I0914 00:39:01.964136  874848 system_pods.go:61] "storage-provisioner" [c95fe42f-e257-4b52-ab42-54086f64f2e4] Running
	I0914 00:39:01.964143  874848 system_pods.go:74] duration metric: took 11.142756624s to wait for pod list to return data ...
	I0914 00:39:01.964165  874848 default_sa.go:34] waiting for default service account to be created ...
	I0914 00:39:01.967349  874848 default_sa.go:45] found service account: "default"
	I0914 00:39:01.967378  874848 default_sa.go:55] duration metric: took 3.206253ms for default service account to be created ...
	I0914 00:39:01.967389  874848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 00:39:01.979121  874848 system_pods.go:86] 18 kube-system pods found
	I0914 00:39:01.979159  874848 system_pods.go:89] "coredns-7c65d6cfc9-8m89r" [550228bd-69a1-4530-af98-0200cecdabf1] Running
	I0914 00:39:01.979168  874848 system_pods.go:89] "csi-hostpath-attacher-0" [cbc09b3c-e59c-4698-b6c7-f9d1746ab697] Running
	I0914 00:39:01.979173  874848 system_pods.go:89] "csi-hostpath-resizer-0" [1d0b01fe-048b-4b9e-82dd-5b408414180f] Running
	I0914 00:39:01.979178  874848 system_pods.go:89] "csi-hostpathplugin-mgx77" [456dedd2-11aa-43aa-8f21-e93340384161] Running
	I0914 00:39:01.979183  874848 system_pods.go:89] "etcd-addons-885748" [76fc0bec-b6e2-415d-8c2a-3bdb3f6bf113] Running
	I0914 00:39:01.979189  874848 system_pods.go:89] "kindnet-m55kx" [724646d8-f3df-4b7c-830a-ec84d16dc1c6] Running
	I0914 00:39:01.979194  874848 system_pods.go:89] "kube-apiserver-addons-885748" [c6447df2-c534-4e85-afc8-5da7d2435aa6] Running
	I0914 00:39:01.979199  874848 system_pods.go:89] "kube-controller-manager-addons-885748" [9727b4e8-1fa1-4175-b2ce-7bdd6ac0676c] Running
	I0914 00:39:01.979210  874848 system_pods.go:89] "kube-ingress-dns-minikube" [e6eb7e3a-203d-452a-b040-fbe431e6f08f] Running
	I0914 00:39:01.979215  874848 system_pods.go:89] "kube-proxy-dqs2h" [ad11d9fd-caaa-4026-86f8-aba3e5ac2834] Running
	I0914 00:39:01.979222  874848 system_pods.go:89] "kube-scheduler-addons-885748" [ae7fd70d-d206-474f-a967-53dc9227db19] Running
	I0914 00:39:01.979226  874848 system_pods.go:89] "metrics-server-84c5f94fbc-96xbg" [9c339307-23c2-46f3-af0b-9a4d12c82b32] Running
	I0914 00:39:01.979243  874848 system_pods.go:89] "nvidia-device-plugin-daemonset-9nphx" [8f3b2546-ef55-49b2-8f31-dd8f4ecdcf93] Running
	I0914 00:39:01.979273  874848 system_pods.go:89] "registry-66c9cd494c-bkhkl" [4d931f29-d87c-4bc8-8e58-88b441e56b0a] Running
	I0914 00:39:01.979280  874848 system_pods.go:89] "registry-proxy-fb2vb" [7d63ca1e-f5bf-47eb-84af-ebd01e9cd4b6] Running
	I0914 00:39:01.979284  874848 system_pods.go:89] "snapshot-controller-56fcc65765-8pfcj" [37872304-9181-40b4-8ebf-9958cdc3a7b0] Running
	I0914 00:39:01.979288  874848 system_pods.go:89] "snapshot-controller-56fcc65765-nwsdn" [bb956da0-8552-4d95-a92d-8a7311005caf] Running
	I0914 00:39:01.979292  874848 system_pods.go:89] "storage-provisioner" [c95fe42f-e257-4b52-ab42-54086f64f2e4] Running
	I0914 00:39:01.979298  874848 system_pods.go:126] duration metric: took 11.903645ms to wait for k8s-apps to be running ...
	I0914 00:39:01.979308  874848 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 00:39:01.979371  874848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:39:01.992649  874848 system_svc.go:56] duration metric: took 13.330968ms WaitForService to wait for kubelet
	I0914 00:39:01.992681  874848 kubeadm.go:582] duration metric: took 2m39.420274083s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:39:01.992702  874848 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:39:01.996886  874848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 00:39:01.996922  874848 node_conditions.go:123] node cpu capacity is 2
	I0914 00:39:01.996936  874848 node_conditions.go:105] duration metric: took 4.227243ms to run NodePressure ...
	I0914 00:39:01.996950  874848 start.go:241] waiting for startup goroutines ...
	I0914 00:39:01.996958  874848 start.go:246] waiting for cluster config update ...
	I0914 00:39:01.996976  874848 start.go:255] writing updated cluster config ...
	I0914 00:39:01.997319  874848 ssh_runner.go:195] Run: rm -f paused
	I0914 00:39:02.385531  874848 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 00:39:02.387155  874848 out.go:177] * Done! kubectl is now configured to use "addons-885748" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 14 00:51:18 addons-885748 crio[965]: time="2024-09-14 00:51:18.351112077Z" level=info msg="Removed pod sandbox: c6e0bd643d42b2df8c4b2f270557348cb2a30f1a596cf231b055de38619e4a37" id=e586e07e-1cbc-4bab-a271-dfec16349aef name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:51:18 addons-885748 crio[965]: time="2024-09-14 00:51:18.351577576Z" level=info msg="Stopping pod sandbox: 20383ba456212c4d8a5bfaf0c2400a36dfd9635e97d6a9443b32e8efd9a4095a" id=c4b5e2b4-7148-40d9-838e-a8b1a1f2b494 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:51:18 addons-885748 crio[965]: time="2024-09-14 00:51:18.351614498Z" level=info msg="Stopped pod sandbox (already stopped): 20383ba456212c4d8a5bfaf0c2400a36dfd9635e97d6a9443b32e8efd9a4095a" id=c4b5e2b4-7148-40d9-838e-a8b1a1f2b494 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:51:18 addons-885748 crio[965]: time="2024-09-14 00:51:18.351911648Z" level=info msg="Removing pod sandbox: 20383ba456212c4d8a5bfaf0c2400a36dfd9635e97d6a9443b32e8efd9a4095a" id=46f4ce11-9b8a-4927-8de9-5d5b1df8d94b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:51:18 addons-885748 crio[965]: time="2024-09-14 00:51:18.359878804Z" level=info msg="Removed pod sandbox: 20383ba456212c4d8a5bfaf0c2400a36dfd9635e97d6a9443b32e8efd9a4095a" id=46f4ce11-9b8a-4927-8de9-5d5b1df8d94b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:51:18 addons-885748 crio[965]: time="2024-09-14 00:51:18.360457694Z" level=info msg="Stopping pod sandbox: 03389f20052522d49b2986cd8f9b5a73a469340d215ded001a200b52825a86fa" id=6705faa0-729f-4eda-9ab1-e67798393bef name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:51:18 addons-885748 crio[965]: time="2024-09-14 00:51:18.360508286Z" level=info msg="Stopped pod sandbox (already stopped): 03389f20052522d49b2986cd8f9b5a73a469340d215ded001a200b52825a86fa" id=6705faa0-729f-4eda-9ab1-e67798393bef name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:51:18 addons-885748 crio[965]: time="2024-09-14 00:51:18.360804599Z" level=info msg="Removing pod sandbox: 03389f20052522d49b2986cd8f9b5a73a469340d215ded001a200b52825a86fa" id=6dda23c2-cecc-4aba-83e2-ad806adc0ece name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:51:18 addons-885748 crio[965]: time="2024-09-14 00:51:18.367948588Z" level=info msg="Removed pod sandbox: 03389f20052522d49b2986cd8f9b5a73a469340d215ded001a200b52825a86fa" id=6dda23c2-cecc-4aba-83e2-ad806adc0ece name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:51:19 addons-885748 crio[965]: time="2024-09-14 00:51:19.812941574Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6b84fa04-8c8a-4cfb-8e2f-b03cb10c984e name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:51:19 addons-885748 crio[965]: time="2024-09-14 00:51:19.813164067Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6b84fa04-8c8a-4cfb-8e2f-b03cb10c984e name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:51:19 addons-885748 crio[965]: time="2024-09-14 00:51:19.978492888Z" level=warning msg="Stopping container 11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=2104a71a-66eb-4320-a480-00b523fdae79 name=/runtime.v1.RuntimeService/StopContainer
	Sep 14 00:51:20 addons-885748 conmon[5033]: conmon 11df4840b3ccdb0fc7b8 <ninfo>: container 5044 exited with status 137
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.118022398Z" level=info msg="Stopped container 11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348: ingress-nginx/ingress-nginx-controller-bc57996ff-h95vr/controller" id=2104a71a-66eb-4320-a480-00b523fdae79 name=/runtime.v1.RuntimeService/StopContainer
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.118769400Z" level=info msg="Stopping pod sandbox: 08cadfc75ed5b8d5f712b2d09326eaf2f3b8ef8aac6734c6ff4179d6343dc336" id=4a65ac72-3595-4219-9071-931622dac3d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.122415825Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-YS45LBG2M2YWNQXH - [0:0]\n:KUBE-HP-LU3ZPJD7257LEFSH - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-LU3ZPJD7257LEFSH\n-X KUBE-HP-YS45LBG2M2YWNQXH\nCOMMIT\n"
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.136023598Z" level=info msg="Closing host port tcp:80"
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.136077464Z" level=info msg="Closing host port tcp:443"
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.137612187Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.137644818Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.137822832Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-h95vr Namespace:ingress-nginx ID:08cadfc75ed5b8d5f712b2d09326eaf2f3b8ef8aac6734c6ff4179d6343dc336 UID:d369817c-ba09-480a-b6ac-1ca34cdb1eb2 NetNS:/var/run/netns/83227642-c675-413e-9728-2b1de634238a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.137961471Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-h95vr from CNI network \"kindnet\" (type=ptp)"
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.170963131Z" level=info msg="Stopped pod sandbox: 08cadfc75ed5b8d5f712b2d09326eaf2f3b8ef8aac6734c6ff4179d6343dc336" id=4a65ac72-3595-4219-9071-931622dac3d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.305138320Z" level=info msg="Removing container: 11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348" id=69521e10-816d-45c7-8efa-477fd3170398 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 14 00:51:20 addons-885748 crio[965]: time="2024-09-14 00:51:20.318473837Z" level=info msg="Removed container 11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348: ingress-nginx/ingress-nginx-controller-bc57996ff-h95vr/controller" id=69521e10-816d-45c7-8efa-477fd3170398 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4bb1da24cf139       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   9 seconds ago       Running             hello-world-app           0                   68d584bcdc7d1       hello-world-app-55bf9c44b4-d9r78
	3b8982f463ba7       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                         2 minutes ago       Running             nginx                     0                   a5764dafff354       nginx
	fc0328f66b9e0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            13 minutes ago      Running             gcp-auth                  0                   d7b70729e47d5       gcp-auth-89d5ffd79-frj5t
	8091d19cac440       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        13 minutes ago      Running             local-path-provisioner    0                   1220f396bbf80       local-path-provisioner-86d989889c-dlghs
	7d56766635b73       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   14 minutes ago      Running             metrics-server            0                   bb338c2f32bcd       metrics-server-84c5f94fbc-96xbg
	80e8332c931e9       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        14 minutes ago      Running             coredns                   0                   249d9842b4544       coredns-7c65d6cfc9-8m89r
	ebb2e7bdbbfd4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        14 minutes ago      Running             storage-provisioner       0                   105d379cff026       storage-provisioner
	56f7319a8a8d6       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        14 minutes ago      Running             kindnet-cni               0                   0c72d454012fd       kindnet-m55kx
	a47b8e8869ee8       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        15 minutes ago      Running             kube-proxy                0                   ffefb18074c57       kube-proxy-dqs2h
	48d812ac2652a       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        15 minutes ago      Running             kube-apiserver            0                   828ea1cf2ba92       kube-apiserver-addons-885748
	f3056a13deffd       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        15 minutes ago      Running             kube-controller-manager   0                   cc2cb3c49ab23       kube-controller-manager-addons-885748
	d793e5939094c       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        15 minutes ago      Running             kube-scheduler            0                   8aac50a11aa1f       kube-scheduler-addons-885748
	f8b9a437608b9       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        15 minutes ago      Running             etcd                      0                   fecaa719a39f6       etcd-addons-885748
	
	
	==> coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] <==
	[INFO] 10.244.0.12:33235 - 38053 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000139427s
	[INFO] 10.244.0.12:39174 - 20223 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002168352s
	[INFO] 10.244.0.12:39174 - 34553 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001903267s
	[INFO] 10.244.0.12:55989 - 9515 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000126611s
	[INFO] 10.244.0.12:55989 - 28949 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008036s
	[INFO] 10.244.0.12:44725 - 25596 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000196631s
	[INFO] 10.244.0.12:44725 - 58609 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000336263s
	[INFO] 10.244.0.12:37024 - 61418 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000840966s
	[INFO] 10.244.0.12:37024 - 33000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112703s
	[INFO] 10.244.0.12:43586 - 62400 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096252s
	[INFO] 10.244.0.12:43586 - 21956 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000270361s
	[INFO] 10.244.0.12:39958 - 65451 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001701713s
	[INFO] 10.244.0.12:39958 - 44969 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001315145s
	[INFO] 10.244.0.12:45882 - 11582 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000106755s
	[INFO] 10.244.0.12:45882 - 57120 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000056901s
	[INFO] 10.244.0.20:40467 - 22508 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0009463s
	[INFO] 10.244.0.20:47828 - 50659 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016816s
	[INFO] 10.244.0.20:56008 - 60050 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000270443s
	[INFO] 10.244.0.20:57451 - 45764 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000274734s
	[INFO] 10.244.0.20:45104 - 4965 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000161744s
	[INFO] 10.244.0.20:37823 - 38164 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000285663s
	[INFO] 10.244.0.20:38730 - 54617 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003590072s
	[INFO] 10.244.0.20:48720 - 43288 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003891685s
	[INFO] 10.244.0.20:50211 - 8144 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002567145s
	[INFO] 10.244.0.20:33394 - 12183 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002678577s
	
	
	==> describe nodes <==
	Name:               addons-885748
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-885748
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=addons-885748
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_36_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-885748
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:36:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-885748
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:51:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:49:24 +0000   Sat, 14 Sep 2024 00:36:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:49:24 +0000   Sat, 14 Sep 2024 00:36:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:49:24 +0000   Sat, 14 Sep 2024 00:36:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:49:24 +0000   Sat, 14 Sep 2024 00:37:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-885748
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 4359fb52d09b48a99b9422f7ed1aab10
	  System UUID:                97520139-af6f-4519-ad5d-f1e74ef171eb
	  Boot ID:                    fb6d1488-4ff6-49a9-b7dc-0ab0c636005f
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-d9r78           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-89d5ffd79-frj5t                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-8m89r                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-addons-885748                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-m55kx                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-addons-885748               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-885748      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-dqs2h                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-885748               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-96xbg            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-dlghs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 15m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m   kubelet          Node addons-885748 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m   kubelet          Node addons-885748 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m   kubelet          Node addons-885748 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m   node-controller  Node addons-885748 event: Registered Node addons-885748 in Controller
	  Normal   NodeReady                14m   kubelet          Node addons-885748 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] <==
	{"level":"info","ts":"2024-09-14T00:36:11.809593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.809640Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.809693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.809738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.812081Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-885748 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:36:11.812295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:36:11.813737Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:11.813914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:36:11.816988Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:36:11.817279Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:36:11.817309Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:36:11.817872Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:36:11.818699Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:36:11.819114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-14T00:36:11.819222Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:11.821360Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:11.821444Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:23.269674Z","caller":"traceutil/trace.go:171","msg":"trace[1517059370] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"215.568054ms","start":"2024-09-14T00:36:23.054082Z","end":"2024-09-14T00:36:23.269650Z","steps":["trace[1517059370] 'process raft request'  (duration: 116.127873ms)","trace[1517059370] 'compare'  (duration: 99.327166ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T00:38:00.269469Z","caller":"traceutil/trace.go:171","msg":"trace[1121860251] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"101.073361ms","start":"2024-09-14T00:38:00.168377Z","end":"2024-09-14T00:38:00.269450Z","steps":["trace[1121860251] 'process raft request'  (duration: 92.406917ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:46:12.426019Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-09-14T00:46:12.459407Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"32.859233ms","hash":3731172354,"current-db-size-bytes":6463488,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3293184,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-14T00:46:12.459464Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3731172354,"revision":1514,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T00:51:12.431436Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1932}
	{"level":"info","ts":"2024-09-14T00:51:12.448360Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1932,"took":"16.336576ms","hash":2051484435,"current-db-size-bytes":6463488,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":4550656,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-14T00:51:12.448406Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2051484435,"revision":1932,"compact-revision":1514}
	
	
	==> gcp-auth [fc0328f66b9e0b6021b961c2ff50a7c98d37c2056b93b1910e5cac7120024106] <==
	2024/09/14 00:39:02 Ready to write response ...
	2024/09/14 00:39:02 Ready to marshal response ...
	2024/09/14 00:39:02 Ready to write response ...
	2024/09/14 00:47:16 Ready to marshal response ...
	2024/09/14 00:47:16 Ready to write response ...
	2024/09/14 00:47:20 Ready to marshal response ...
	2024/09/14 00:47:20 Ready to write response ...
	2024/09/14 00:47:42 Ready to marshal response ...
	2024/09/14 00:47:42 Ready to write response ...
	2024/09/14 00:48:17 Ready to marshal response ...
	2024/09/14 00:48:17 Ready to write response ...
	2024/09/14 00:48:18 Ready to marshal response ...
	2024/09/14 00:48:18 Ready to write response ...
	2024/09/14 00:48:25 Ready to marshal response ...
	2024/09/14 00:48:25 Ready to write response ...
	2024/09/14 00:48:26 Ready to marshal response ...
	2024/09/14 00:48:26 Ready to write response ...
	2024/09/14 00:48:27 Ready to marshal response ...
	2024/09/14 00:48:27 Ready to write response ...
	2024/09/14 00:48:27 Ready to marshal response ...
	2024/09/14 00:48:27 Ready to write response ...
	2024/09/14 00:48:54 Ready to marshal response ...
	2024/09/14 00:48:54 Ready to write response ...
	2024/09/14 00:51:14 Ready to marshal response ...
	2024/09/14 00:51:14 Ready to write response ...
	
	
	==> kernel <==
	 00:51:25 up  4:33,  0 users,  load average: 0.50, 0.59, 1.68
	Linux addons-885748 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] <==
	I0914 00:49:16.815647       1 main.go:299] handling current node
	I0914 00:49:26.815070       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:49:26.815110       1 main.go:299] handling current node
	I0914 00:49:36.819429       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:49:36.819465       1 main.go:299] handling current node
	I0914 00:49:46.821336       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:49:46.821368       1 main.go:299] handling current node
	I0914 00:49:56.820448       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:49:56.820556       1 main.go:299] handling current node
	I0914 00:50:06.818969       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:50:06.819001       1 main.go:299] handling current node
	I0914 00:50:16.814899       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:50:16.815043       1 main.go:299] handling current node
	I0914 00:50:26.815354       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:50:26.815484       1 main.go:299] handling current node
	I0914 00:50:36.814574       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:50:36.814609       1 main.go:299] handling current node
	I0914 00:50:46.817916       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:50:46.817952       1 main.go:299] handling current node
	I0914 00:50:56.818083       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:50:56.818119       1 main.go:299] handling current node
	I0914 00:51:06.822381       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:51:06.822416       1 main.go:299] handling current node
	I0914 00:51:16.815434       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:51:16.815473       1 main.go:299] handling current node
	
	
	==> kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0914 00:38:28.081595       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.50.17:443: connect: connection refused" logger="UnhandledError"
	E0914 00:38:28.087158       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.50.17:443: connect: connection refused" logger="UnhandledError"
	I0914 00:38:28.202850       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 00:47:30.583195       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0914 00:47:58.487976       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.488113       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.515016       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.515142       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.531465       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.531532       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.560654       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.560914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.650738       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.651626       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0914 00:47:59.605910       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0914 00:47:59.652010       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 00:47:59.748780       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0914 00:48:26.968761       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.170.149"}
	I0914 00:48:48.270011       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0914 00:48:49.312565       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0914 00:48:53.824100       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0914 00:48:54.148868       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.26.185"}
	I0914 00:51:14.321212       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.7.199"}
	
	
	==> kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] <==
	W0914 00:50:01.304168       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:50:01.304214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:50:17.894330       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:50:17.894381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:50:23.235995       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:50:23.236039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:50:35.998043       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:50:35.998101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:50:41.562488       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:50:41.562533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:50:54.441358       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:50:54.441398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 00:51:14.069137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="43.735723ms"
	I0914 00:51:14.083878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.151341ms"
	I0914 00:51:14.102939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="18.923745ms"
	I0914 00:51:14.103119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="54.522µs"
	W0914 00:51:14.864740       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:51:14.864786       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 00:51:16.315288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.386453ms"
	I0914 00:51:16.315540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.375µs"
	I0914 00:51:16.942209       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0914 00:51:16.944786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.655µs"
	I0914 00:51:16.950501       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0914 00:51:24.333887       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:51:24.333928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] <==
	I0914 00:36:28.440930       1 server_linux.go:66] "Using iptables proxy"
	I0914 00:36:28.793093       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0914 00:36:28.800749       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:36:28.881787       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 00:36:28.881853       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:36:28.885486       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:36:28.886040       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:36:28.886064       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:36:28.895020       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:36:28.895563       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:36:28.895926       1 config.go:199] "Starting service config controller"
	I0914 00:36:28.895990       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:36:28.896039       1 config.go:328] "Starting node config controller"
	I0914 00:36:28.896085       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:36:28.997049       1 shared_informer.go:320] Caches are synced for node config
	I0914 00:36:28.997094       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:36:28.997136       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] <==
	W0914 00:36:15.114600       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 00:36:15.115262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:15.991953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 00:36:15.991999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.008831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 00:36:16.008904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.013935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 00:36:16.013978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.023529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 00:36:16.023578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.064172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 00:36:16.064214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.099965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 00:36:16.100011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.138383       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 00:36:16.138445       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 00:36:16.186067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 00:36:16.186184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.187287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 00:36:16.187385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.203065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 00:36:16.203112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.215830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 00:36:16.215921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0914 00:36:18.915639       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 00:51:15 addons-885748 kubelet[1502]: I0914 00:51:15.441330    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvz5t\" (UniqueName: \"kubernetes.io/projected/e6eb7e3a-203d-452a-b040-fbe431e6f08f-kube-api-access-pvz5t\") pod \"e6eb7e3a-203d-452a-b040-fbe431e6f08f\" (UID: \"e6eb7e3a-203d-452a-b040-fbe431e6f08f\") "
	Sep 14 00:51:15 addons-885748 kubelet[1502]: I0914 00:51:15.447373    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6eb7e3a-203d-452a-b040-fbe431e6f08f-kube-api-access-pvz5t" (OuterVolumeSpecName: "kube-api-access-pvz5t") pod "e6eb7e3a-203d-452a-b040-fbe431e6f08f" (UID: "e6eb7e3a-203d-452a-b040-fbe431e6f08f"). InnerVolumeSpecName "kube-api-access-pvz5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 00:51:15 addons-885748 kubelet[1502]: I0914 00:51:15.541868    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pvz5t\" (UniqueName: \"kubernetes.io/projected/e6eb7e3a-203d-452a-b040-fbe431e6f08f-kube-api-access-pvz5t\") on node \"addons-885748\" DevicePath \"\""
	Sep 14 00:51:16 addons-885748 kubelet[1502]: I0914 00:51:16.292549    1502 scope.go:117] "RemoveContainer" containerID="4c38221755a8174705e39ce855b8f90824f9dad246093f82ec41bf4e8d808a49"
	Sep 14 00:51:16 addons-885748 kubelet[1502]: I0914 00:51:16.330273    1502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-d9r78" podStartSLOduration=1.183802847 podStartE2EDuration="2.330251298s" podCreationTimestamp="2024-09-14 00:51:14 +0000 UTC" firstStartedPulling="2024-09-14 00:51:14.436766841 +0000 UTC m=+896.768469855" lastFinishedPulling="2024-09-14 00:51:15.583215292 +0000 UTC m=+897.914918306" observedRunningTime="2024-09-14 00:51:16.307012015 +0000 UTC m=+898.638715037" watchObservedRunningTime="2024-09-14 00:51:16.330251298 +0000 UTC m=+898.661954312"
	Sep 14 00:51:17 addons-885748 kubelet[1502]: I0914 00:51:17.814825    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e281143-bf5f-44f0-b8e8-9bac712c0ee5" path="/var/lib/kubelet/pods/0e281143-bf5f-44f0-b8e8-9bac712c0ee5/volumes"
	Sep 14 00:51:17 addons-885748 kubelet[1502]: I0914 00:51:17.815217    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f3e508-0e68-43c6-ab43-6994c9da340a" path="/var/lib/kubelet/pods/e1f3e508-0e68-43c6-ab43-6994c9da340a/volumes"
	Sep 14 00:51:17 addons-885748 kubelet[1502]: I0914 00:51:17.815585    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6eb7e3a-203d-452a-b040-fbe431e6f08f" path="/var/lib/kubelet/pods/e6eb7e3a-203d-452a-b040-fbe431e6f08f/volumes"
	Sep 14 00:51:17 addons-885748 kubelet[1502]: E0914 00:51:17.878536    1502 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a, memory: /docker/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/system.slice/kubelet.service"
	Sep 14 00:51:18 addons-885748 kubelet[1502]: E0914 00:51:18.186763    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275078186526935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:51:18 addons-885748 kubelet[1502]: E0914 00:51:18.186798    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275078186526935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:51:18 addons-885748 kubelet[1502]: I0914 00:51:18.308903    1502 scope.go:117] "RemoveContainer" containerID="826b0a5e7c152a1f27949cdd2a57d3b49348e7c230d7a60d153ae8eb2d22b468"
	Sep 14 00:51:18 addons-885748 kubelet[1502]: I0914 00:51:18.323333    1502 scope.go:117] "RemoveContainer" containerID="8333c07f8b12c0f7a45618e6d90c7f105c8d6448a64f34ef1cfa334d0b86166f"
	Sep 14 00:51:19 addons-885748 kubelet[1502]: E0914 00:51:19.813433    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="64665afd-5894-48bd-a4bb-693ba380ced0"
	Sep 14 00:51:20 addons-885748 kubelet[1502]: I0914 00:51:20.272802    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d369817c-ba09-480a-b6ac-1ca34cdb1eb2-webhook-cert\") pod \"d369817c-ba09-480a-b6ac-1ca34cdb1eb2\" (UID: \"d369817c-ba09-480a-b6ac-1ca34cdb1eb2\") "
	Sep 14 00:51:20 addons-885748 kubelet[1502]: I0914 00:51:20.272871    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh9gk\" (UniqueName: \"kubernetes.io/projected/d369817c-ba09-480a-b6ac-1ca34cdb1eb2-kube-api-access-vh9gk\") pod \"d369817c-ba09-480a-b6ac-1ca34cdb1eb2\" (UID: \"d369817c-ba09-480a-b6ac-1ca34cdb1eb2\") "
	Sep 14 00:51:20 addons-885748 kubelet[1502]: I0914 00:51:20.274976    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d369817c-ba09-480a-b6ac-1ca34cdb1eb2-kube-api-access-vh9gk" (OuterVolumeSpecName: "kube-api-access-vh9gk") pod "d369817c-ba09-480a-b6ac-1ca34cdb1eb2" (UID: "d369817c-ba09-480a-b6ac-1ca34cdb1eb2"). InnerVolumeSpecName "kube-api-access-vh9gk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 00:51:20 addons-885748 kubelet[1502]: I0914 00:51:20.275562    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d369817c-ba09-480a-b6ac-1ca34cdb1eb2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d369817c-ba09-480a-b6ac-1ca34cdb1eb2" (UID: "d369817c-ba09-480a-b6ac-1ca34cdb1eb2"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 14 00:51:20 addons-885748 kubelet[1502]: I0914 00:51:20.303361    1502 scope.go:117] "RemoveContainer" containerID="11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348"
	Sep 14 00:51:20 addons-885748 kubelet[1502]: I0914 00:51:20.318725    1502 scope.go:117] "RemoveContainer" containerID="11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348"
	Sep 14 00:51:20 addons-885748 kubelet[1502]: E0914 00:51:20.319133    1502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348\": container with ID starting with 11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348 not found: ID does not exist" containerID="11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348"
	Sep 14 00:51:20 addons-885748 kubelet[1502]: I0914 00:51:20.319172    1502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348"} err="failed to get container status \"11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348\": rpc error: code = NotFound desc = could not find container \"11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348\": container with ID starting with 11df4840b3ccdb0fc7b80314305812d745c3fd9bdebda85fba223eb7434de348 not found: ID does not exist"
	Sep 14 00:51:20 addons-885748 kubelet[1502]: I0914 00:51:20.373855    1502 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d369817c-ba09-480a-b6ac-1ca34cdb1eb2-webhook-cert\") on node \"addons-885748\" DevicePath \"\""
	Sep 14 00:51:20 addons-885748 kubelet[1502]: I0914 00:51:20.373889    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vh9gk\" (UniqueName: \"kubernetes.io/projected/d369817c-ba09-480a-b6ac-1ca34cdb1eb2-kube-api-access-vh9gk\") on node \"addons-885748\" DevicePath \"\""
	Sep 14 00:51:21 addons-885748 kubelet[1502]: I0914 00:51:21.814679    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d369817c-ba09-480a-b6ac-1ca34cdb1eb2" path="/var/lib/kubelet/pods/d369817c-ba09-480a-b6ac-1ca34cdb1eb2/volumes"
	
	
	==> storage-provisioner [ebb2e7bdbbfd4e15a1df8147f8ab8e288ada7a8b4fb1482db8fd01effcb11eef] <==
	I0914 00:37:07.958850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 00:37:07.973989       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 00:37:07.974042       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 00:37:07.989411       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 00:37:07.989599       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-885748_49a32771-25bb-4138-a77d-2ab9ae6fe758!
	I0914 00:37:07.993894       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1f3463c-ac9e-45b9-aadc-bdd81184edd4", APIVersion:"v1", ResourceVersion:"870", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-885748_49a32771-25bb-4138-a77d-2ab9ae6fe758 became leader
	I0914 00:37:08.090873       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-885748_49a32771-25bb-4138-a77d-2ab9ae6fe758!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-885748 -n addons-885748
helpers_test.go:261: (dbg) Run:  kubectl --context addons-885748 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-885748 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-885748 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-885748/192.168.49.2
	Start Time:       Sat, 14 Sep 2024 00:39:02 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9c6mj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9c6mj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  12m                   default-scheduler  Successfully assigned default/busybox to addons-885748
	  Normal   Pulling    10m (x4 over 12m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 12m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 12m)     kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 12m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m19s (x43 over 12m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.11s)
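The describe output captured above shows the busybox pod stuck in ImagePullBackOff because the kubelet could not authenticate while pulling gcr.io/k8s-minikube/busybox:1.28.4-glibc. As a rough illustration of how such a post-mortem capture can be driven from Go (the harness shells out to kubectl for these "(dbg) Run" steps), here is a minimal sketch; the function name and structure are hypothetical and not taken from helpers_test.go.

// Hypothetical sketch: shell out to `kubectl describe pod` and capture its
// combined output, similar in spirit to the dbg capture shown in this report.
package main

import (
	"fmt"
	"os/exec"
)

// describeNonRunningPod runs `kubectl --context <ctx> describe pod <name>`
// and returns everything the command printed.
func describeNonRunningPod(kubeContext, podName string) (string, error) {
	cmd := exec.Command("kubectl", "--context", kubeContext, "describe", "pod", podName)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := describeNonRunningPod("addons-885748", "busybox")
	if err != nil {
		fmt.Println("describe failed:", err)
	}
	fmt.Println(out)
}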

                                                
                                    
TestAddons/parallel/MetricsServer (348.43s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 6.000071ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-96xbg" [9c339307-23c2-46f3-af0b-9a4d12c82b32] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003776144s
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (91.760738ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 12m9.374660408s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (89.510128ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 12m11.323127546s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (86.890823ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 12m17.638168324s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (90.42658ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 12m25.945183146s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (96.892376ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 12m39.08349473s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (84.860547ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 12m53.336362907s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (98.07715ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 13m5.187062447s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (100.267846ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 13m26.751991453s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (83.203175ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 14m37.913535751s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (89.189948ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 15m15.497545874s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (85.453078ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 16m43.397381597s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-885748 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-885748 top pods -n kube-system: exit status 1 (86.027999ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-8m89r, age: 17m48.428279488s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 addons disable metrics-server --alsologtostderr -v=1
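The repeated `kubectl top pods -n kube-system` attempts above keep failing for several minutes before addons_test.go:431 gives up, which is consistent with a poll-until-deadline loop around the kubectl invocation. Below is a minimal Go sketch of that pattern under that assumption; the helper name, backoff values, and timeout are illustrative and not the actual addons_test.go code.

// Minimal sketch, assuming the check simply re-runs `kubectl top pods`
// with backoff until metrics become available or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPodMetrics polls `kubectl top pods -n <namespace>` until the command
// succeeds or the timeout expires, returning the last error on failure.
func waitForPodMetrics(kubeContext, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	interval := 2 * time.Second
	for {
		cmd := exec.Command("kubectl", "--context", kubeContext, "top", "pods", "-n", namespace)
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("metrics never became available: %v\n%s", err, out)
		}
		time.Sleep(interval)
		interval *= 2 // simple exponential backoff between attempts
	}
}

func main() {
	if err := waitForPodMetrics("addons-885748", "kube-system", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}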
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-885748
helpers_test.go:235: (dbg) docker inspect addons-885748:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a",
	        "Created": "2024-09-14T00:35:51.693021132Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 875338,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-14T00:35:51.852610858Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fe3365929e6ce54b4c06f0bc3d1500dff08f535844ef4978f2c45cd67c542134",
	        "ResolvConfPath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/hostname",
	        "HostsPath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/hosts",
	        "LogPath": "/var/lib/docker/containers/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a/16a9106e2bf9c3c6117026b8b9450dd89d4df45737783141ee7ef5e3ae26f52a-json.log",
	        "Name": "/addons-885748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-885748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-885748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee-init/diff:/var/lib/docker/overlay2/75b2121147f32424fffc5e50d2609c96cf2fdc411273d8660afbb09b8a3ad07a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/717d7a298ae509566e8dbdb01cff4c48236f098b7e269390bd91dbe10c1fbeee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-885748",
	                "Source": "/var/lib/docker/volumes/addons-885748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-885748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-885748",
	                "name.minikube.sigs.k8s.io": "addons-885748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1a2274d2fe074b454d8fc13c1575d8f017a8d3113ed94af95faf9d1d2583971",
	            "SandboxKey": "/var/run/docker/netns/b1a2274d2fe0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33564"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33565"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33568"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33566"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33567"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-885748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c1a0d21fd124d60633c329f0674dc6666a0292fe6f6b1be172c6bb2b7fa6a718",
	                    "EndpointID": "ce9472c43f8e5b4bcc4e1fe669f69274e4050166515b932738a2ad8472c5184d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-885748",
	                        "16a9106e2bf9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-885748 -n addons-885748
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-885748 logs -n 25: (1.501566754s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-396021                                                                     | download-only-396021   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| start   | --download-only -p                                                                          | download-docker-830102 | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | download-docker-830102                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-830102                                                                   | download-docker-830102 | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-918324   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | binary-mirror-918324                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44679                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-918324                                                                     | binary-mirror-918324   | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| addons  | disable dashboard -p                                                                        | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | addons-885748                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | addons-885748                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-885748 --wait=true                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-885748 addons                                                                        | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:47 UTC | 14 Sep 24 00:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-885748 addons                                                                        | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:47 UTC | 14 Sep 24 00:47 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-885748 ip                                                                            | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | -p addons-885748                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-885748 ssh cat                                                                       | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | /opt/local-path-provisioner/pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | addons-885748                                                                               |                        |         |         |                     |                     |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | -p addons-885748                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:48 UTC | 14 Sep 24 00:48 UTC |
	|         | addons-885748                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-885748 ssh curl -s                                                                   | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:49 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-885748 ip                                                                            | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:51 UTC | 14 Sep 24 00:51 UTC |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:51 UTC | 14 Sep 24 00:51 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-885748 addons disable                                                                | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:51 UTC | 14 Sep 24 00:51 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-885748 addons                                                                        | addons-885748          | jenkins | v1.34.0 | 14 Sep 24 00:54 UTC | 14 Sep 24 00:54 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:35:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:35:27.648597  874848 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:35:27.648788  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:27.648825  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:35:27.648839  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:27.649116  874848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 00:35:27.649628  874848 out.go:352] Setting JSON to false
	I0914 00:35:27.650620  874848 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15472,"bootTime":1726258656,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0914 00:35:27.650703  874848 start.go:139] virtualization:  
	I0914 00:35:27.652331  874848 out.go:177] * [addons-885748] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 00:35:27.654538  874848 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:35:27.654641  874848 notify.go:220] Checking for updates...
	I0914 00:35:27.657216  874848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:35:27.658569  874848 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 00:35:27.659690  874848 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	I0914 00:35:27.661085  874848 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 00:35:27.662124  874848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:35:27.663629  874848 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:35:27.685055  874848 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:35:27.685194  874848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:27.743708  874848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:35:27.734595728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:27.743818  874848 docker.go:318] overlay module found
	I0914 00:35:27.746292  874848 out.go:177] * Using the docker driver based on user configuration
	I0914 00:35:27.747376  874848 start.go:297] selected driver: docker
	I0914 00:35:27.747391  874848 start.go:901] validating driver "docker" against <nil>
	I0914 00:35:27.747405  874848 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:35:27.748035  874848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:27.802291  874848 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:35:27.792988752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:27.802504  874848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:35:27.802746  874848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:35:27.803994  874848 out.go:177] * Using Docker driver with root privileges
	I0914 00:35:27.804986  874848 cni.go:84] Creating CNI manager for ""
	I0914 00:35:27.805046  874848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:35:27.805057  874848 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 00:35:27.805150  874848 start.go:340] cluster config:
	{Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:35:27.807346  874848 out.go:177] * Starting "addons-885748" primary control-plane node in "addons-885748" cluster
	I0914 00:35:27.808476  874848 cache.go:121] Beginning downloading kic base image for docker with crio
	I0914 00:35:27.809606  874848 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0914 00:35:27.810871  874848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:27.810920  874848 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0914 00:35:27.810932  874848 cache.go:56] Caching tarball of preloaded images
	I0914 00:35:27.810960  874848 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 00:35:27.811021  874848 preload.go:172] Found /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 00:35:27.811031  874848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 00:35:27.811392  874848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/config.json ...
	I0914 00:35:27.811450  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/config.json: {Name:mk574a8eb9ef8f9e3b261644b0ca0e71c6fc48e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:27.826453  874848 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 00:35:27.826558  874848 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 00:35:27.826581  874848 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 00:35:27.826586  874848 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 00:35:27.826598  874848 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 00:35:27.826604  874848 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0914 00:35:44.803607  874848 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0914 00:35:44.803643  874848 cache.go:194] Successfully downloaded all kic artifacts
	I0914 00:35:44.803673  874848 start.go:360] acquireMachinesLock for addons-885748: {Name:mk9ddda16eaf26a40c295d659f1e42acd6143125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:35:44.803799  874848 start.go:364] duration metric: took 104.539µs to acquireMachinesLock for "addons-885748"
	I0914 00:35:44.803830  874848 start.go:93] Provisioning new machine with config: &{Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:35:44.803926  874848 start.go:125] createHost starting for "" (driver="docker")
	I0914 00:35:44.805508  874848 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0914 00:35:44.805767  874848 start.go:159] libmachine.API.Create for "addons-885748" (driver="docker")
	I0914 00:35:44.805803  874848 client.go:168] LocalClient.Create starting
	I0914 00:35:44.805931  874848 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem
	I0914 00:35:45.234194  874848 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem
	I0914 00:35:45.623675  874848 cli_runner.go:164] Run: docker network inspect addons-885748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 00:35:45.638888  874848 cli_runner.go:211] docker network inspect addons-885748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 00:35:45.638974  874848 network_create.go:284] running [docker network inspect addons-885748] to gather additional debugging logs...
	I0914 00:35:45.638996  874848 cli_runner.go:164] Run: docker network inspect addons-885748
	W0914 00:35:45.653957  874848 cli_runner.go:211] docker network inspect addons-885748 returned with exit code 1
	I0914 00:35:45.653988  874848 network_create.go:287] error running [docker network inspect addons-885748]: docker network inspect addons-885748: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-885748 not found
	I0914 00:35:45.654007  874848 network_create.go:289] output of [docker network inspect addons-885748]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-885748 not found
	
	** /stderr **
	I0914 00:35:45.654106  874848 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 00:35:45.672611  874848 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400048fee0}
	I0914 00:35:45.672659  874848 network_create.go:124] attempt to create docker network addons-885748 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 00:35:45.672715  874848 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-885748 addons-885748
	I0914 00:35:45.738461  874848 network_create.go:108] docker network addons-885748 192.168.49.0/24 created
	I0914 00:35:45.738494  874848 kic.go:121] calculated static IP "192.168.49.2" for the "addons-885748" container
	I0914 00:35:45.738570  874848 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 00:35:45.753096  874848 cli_runner.go:164] Run: docker volume create addons-885748 --label name.minikube.sigs.k8s.io=addons-885748 --label created_by.minikube.sigs.k8s.io=true
	I0914 00:35:45.768446  874848 oci.go:103] Successfully created a docker volume addons-885748
	I0914 00:35:45.768544  874848 cli_runner.go:164] Run: docker run --rm --name addons-885748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885748 --entrypoint /usr/bin/test -v addons-885748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib
	I0914 00:35:47.532910  874848 cli_runner.go:217] Completed: docker run --rm --name addons-885748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885748 --entrypoint /usr/bin/test -v addons-885748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib: (1.764328482s)
	I0914 00:35:47.532939  874848 oci.go:107] Successfully prepared a docker volume addons-885748
	I0914 00:35:47.532965  874848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:47.532986  874848 kic.go:194] Starting extracting preloaded images to volume ...
	I0914 00:35:47.533050  874848 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-885748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 00:35:51.627808  874848 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-885748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir: (4.094718795s)
	I0914 00:35:51.627842  874848 kic.go:203] duration metric: took 4.094852633s to extract preloaded images to volume ...
	W0914 00:35:51.627991  874848 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 00:35:51.628114  874848 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 00:35:51.679472  874848 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-885748 --name addons-885748 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885748 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-885748 --network addons-885748 --ip 192.168.49.2 --volume addons-885748:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243
	I0914 00:35:52.026413  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Running}}
	I0914 00:35:52.054130  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:35:52.081150  874848 cli_runner.go:164] Run: docker exec addons-885748 stat /var/lib/dpkg/alternatives/iptables
	I0914 00:35:52.151645  874848 oci.go:144] the created container "addons-885748" has a running status.
	I0914 00:35:52.151674  874848 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa...
	I0914 00:35:52.411723  874848 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 00:35:52.437353  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:35:52.459127  874848 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 00:35:52.459149  874848 kic_runner.go:114] Args: [docker exec --privileged addons-885748 chown docker:docker /home/docker/.ssh/authorized_keys]
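
For illustration only (not part of the captured log): a minimal Go sketch of generating an RSA key pair and serializing it the way the id_rsa/id_rsa.pub files above are used. File names, key size and permissions here are assumptions, not minikube's exact parameters.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate an RSA key for SSH access to the node container.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        // Private key in PEM (PKCS#1) form, e.g. the machine's id_rsa file.
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        _ = os.WriteFile("id_rsa", privPEM, 0600)

        // Public key in authorized_keys format, e.g. id_rsa.pub.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        _ = os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644)
    }
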
	I0914 00:35:52.535444  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:35:52.561504  874848 machine.go:93] provisionDockerMachine start ...
	I0914 00:35:52.561596  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:52.592426  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:52.592702  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:52.592718  874848 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 00:35:52.593577  874848 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48640->127.0.0.1:33564: read: connection reset by peer
	I0914 00:35:55.712678  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-885748
	
	I0914 00:35:55.712704  874848 ubuntu.go:169] provisioning hostname "addons-885748"
	I0914 00:35:55.712793  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:55.730083  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:55.730330  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:55.730355  874848 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-885748 && echo "addons-885748" | sudo tee /etc/hostname
	I0914 00:35:55.863937  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-885748
	
	I0914 00:35:55.864025  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:55.884479  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:55.884728  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:55.884753  874848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-885748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-885748/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-885748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:35:56.006206  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
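
For illustration only (not part of the captured log): the "native" SSH client mentioned above is a Go SSH client. A minimal sketch, assuming the key file and the forwarded port 127.0.0.1:33564 shown in the log, of dialing the node and running one command.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("id_rsa") // machine key path assumed
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }

        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local, throwaway node
        }

        // 127.0.0.1:33564 is the host port Docker mapped to the container's port 22.
        client, err := ssh.Dial("tcp", "127.0.0.1:33564", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        out, err := sess.CombinedOutput("hostname")
        fmt.Printf("%s err=%v\n", out, err)
    }
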
	I0914 00:35:56.006299  874848 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-868698/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-868698/.minikube}
	I0914 00:35:56.006369  874848 ubuntu.go:177] setting up certificates
	I0914 00:35:56.006397  874848 provision.go:84] configureAuth start
	I0914 00:35:56.006497  874848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885748
	I0914 00:35:56.025656  874848 provision.go:143] copyHostCerts
	I0914 00:35:56.025744  874848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem (1078 bytes)
	I0914 00:35:56.025874  874848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem (1123 bytes)
	I0914 00:35:56.025946  874848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem (1679 bytes)
	I0914 00:35:56.026001  874848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem org=jenkins.addons-885748 san=[127.0.0.1 192.168.49.2 addons-885748 localhost minikube]
	I0914 00:35:56.397039  874848 provision.go:177] copyRemoteCerts
	I0914 00:35:56.397111  874848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:35:56.397152  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.413576  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:56.502071  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 00:35:56.525597  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:35:56.549087  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 00:35:56.572443  874848 provision.go:87] duration metric: took 566.020273ms to configureAuth
	I0914 00:35:56.572469  874848 ubuntu.go:193] setting minikube options for container-runtime
	I0914 00:35:56.572641  874848 config.go:182] Loaded profile config "addons-885748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:35:56.572750  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.589020  874848 main.go:141] libmachine: Using SSH client type: native
	I0914 00:35:56.589468  874848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33564 <nil> <nil>}
	I0914 00:35:56.589494  874848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 00:35:56.813689  874848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 00:35:56.813714  874848 machine.go:96] duration metric: took 4.252187622s to provisionDockerMachine
	I0914 00:35:56.813724  874848 client.go:171] duration metric: took 12.007912s to LocalClient.Create
	I0914 00:35:56.813737  874848 start.go:167] duration metric: took 12.007978992s to libmachine.API.Create "addons-885748"
	I0914 00:35:56.813745  874848 start.go:293] postStartSetup for "addons-885748" (driver="docker")
	I0914 00:35:56.813756  874848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:35:56.813824  874848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:35:56.813884  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.830802  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:56.918469  874848 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:35:56.921566  874848 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 00:35:56.921600  874848 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 00:35:56.921611  874848 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 00:35:56.921619  874848 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 00:35:56.921629  874848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/addons for local assets ...
	I0914 00:35:56.921700  874848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/files for local assets ...
	I0914 00:35:56.921730  874848 start.go:296] duration metric: took 107.979103ms for postStartSetup
	I0914 00:35:56.922050  874848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885748
	I0914 00:35:56.937996  874848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/config.json ...
	I0914 00:35:56.938300  874848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:35:56.938349  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:56.957478  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:57.042229  874848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 00:35:57.047056  874848 start.go:128] duration metric: took 12.243112242s to createHost
	I0914 00:35:57.047078  874848 start.go:83] releasing machines lock for "addons-885748", held for 12.243266454s
	I0914 00:35:57.047155  874848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885748
	I0914 00:35:57.063313  874848 ssh_runner.go:195] Run: cat /version.json
	I0914 00:35:57.063378  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:57.063655  874848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:35:57.063724  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:35:57.084371  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:57.094261  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:35:57.308581  874848 ssh_runner.go:195] Run: systemctl --version
	I0914 00:35:57.312939  874848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 00:35:57.451620  874848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 00:35:57.455973  874848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:35:57.477002  874848 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 00:35:57.477132  874848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:35:57.511110  874848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0914 00:35:57.511137  874848 start.go:495] detecting cgroup driver to use...
	I0914 00:35:57.511169  874848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 00:35:57.511217  874848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 00:35:57.526481  874848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 00:35:57.538293  874848 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:35:57.538364  874848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:35:57.552686  874848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:35:57.568072  874848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:35:57.662991  874848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:35:57.755248  874848 docker.go:233] disabling docker service ...
	I0914 00:35:57.755320  874848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:35:57.774750  874848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:35:57.786925  874848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:35:57.878521  874848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:35:57.968297  874848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:35:57.980122  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:35:57.996615  874848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 00:35:57.996733  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.007909  874848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 00:35:58.008088  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.019602  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.030797  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.040901  874848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:35:58.051366  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.061514  874848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.077469  874848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 00:35:58.087600  874848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:35:58.096431  874848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:35:58.104922  874848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:35:58.194238  874848 ssh_runner.go:195] Run: sudo systemctl restart crio
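
For illustration only (not part of the captured log): the sed commands above rewrite the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A minimal Go sketch of the same two substitutions, assuming the file is readable locally rather than over SSH.

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }

        // Same substitutions the logged sed commands perform.
        pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        data = pauseRe.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

        cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        data = cgroupRe.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

        if err := os.WriteFile(path, data, 0644); err != nil {
            panic(err)
        }
        // cri-o must be restarted afterwards (the log runs `sudo systemctl restart crio`).
    }
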
	I0914 00:35:58.315200  874848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 00:35:58.315290  874848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 00:35:58.319115  874848 start.go:563] Will wait 60s for crictl version
	I0914 00:35:58.319183  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:35:58.322590  874848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:35:58.360321  874848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 00:35:58.360485  874848 ssh_runner.go:195] Run: crio --version
	I0914 00:35:58.401355  874848 ssh_runner.go:195] Run: crio --version
	I0914 00:35:58.441347  874848 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0914 00:35:58.443849  874848 cli_runner.go:164] Run: docker network inspect addons-885748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 00:35:58.459835  874848 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 00:35:58.463371  874848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:35:58.473895  874848 kubeadm.go:883] updating cluster {Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:35:58.474017  874848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:58.474077  874848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:35:58.547909  874848 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:35:58.547932  874848 crio.go:433] Images already preloaded, skipping extraction
	I0914 00:35:58.547987  874848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:35:58.584064  874848 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 00:35:58.584085  874848 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:35:58.584094  874848 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0914 00:35:58.584187  874848 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-885748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 00:35:58.584272  874848 ssh_runner.go:195] Run: crio config
	I0914 00:35:58.630750  874848 cni.go:84] Creating CNI manager for ""
	I0914 00:35:58.630773  874848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:35:58.630784  874848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:35:58.630808  874848 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-885748 NodeName:addons-885748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:35:58.630990  874848 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-885748"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 00:35:58.631062  874848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:35:58.639996  874848 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:35:58.640108  874848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:35:58.648765  874848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0914 00:35:58.666409  874848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:35:58.684328  874848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
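
For illustration only (not part of the captured log): the kubeadm.yaml copied above is rendered from the cluster config. A minimal Go sketch of templating one fragment of it with text/template, using values visible in the log; the template text and struct are illustrative, not minikube's actual generator.

    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
        // Values taken from the generated config shown in the log.
        data := struct {
            NodeIP        string
            APIServerPort int
            NodeName      string
        }{"192.168.49.2", 8443, "addons-885748"}

        tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
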
	I0914 00:35:58.702308  874848 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 00:35:58.705701  874848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:35:58.716106  874848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:35:58.806646  874848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:35:58.820194  874848 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748 for IP: 192.168.49.2
	I0914 00:35:58.820228  874848 certs.go:194] generating shared ca certs ...
	I0914 00:35:58.820260  874848 certs.go:226] acquiring lock for ca certs: {Name:mk51aad7f25871620dee3805dbb159a74d927d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:58.821048  874848 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key
	I0914 00:35:59.115008  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt ...
	I0914 00:35:59.115046  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt: {Name:mk7e420a6f4116f40ba205310e9949cc0a07cff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.115273  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key ...
	I0914 00:35:59.115289  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key: {Name:mk6495fd05c501516a1dbc6a3c5a3d111749eaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.115383  874848 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key
	I0914 00:35:59.669563  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt ...
	I0914 00:35:59.669645  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt: {Name:mk74326826b78a79963a2466e661d640c5de6beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.670798  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key ...
	I0914 00:35:59.670831  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key: {Name:mkaa14c9fcec32cffb1eac0dcfd1682b507c2fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:35:59.671658  874848 certs.go:256] generating profile certs ...
	I0914 00:35:59.671756  874848 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.key
	I0914 00:35:59.671786  874848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt with IP's: []
	I0914 00:36:00.652822  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt ...
	I0914 00:36:00.652865  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: {Name:mk1fbf9bed840a2d57fd0d4fd8e94a75ab019179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.653669  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.key ...
	I0914 00:36:00.653689  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.key: {Name:mkbe4a15da3a2ff3d45a92e0a1634742aa384a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.654315  874848 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37
	I0914 00:36:00.654340  874848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0914 00:36:00.819327  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37 ...
	I0914 00:36:00.819359  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37: {Name:mk886299dc91db0af4189545598b67789e917e31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.820194  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37 ...
	I0914 00:36:00.820213  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37: {Name:mk8b5789c23e69638787fc7a9959d1efbdaf2020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:00.820297  874848 certs.go:381] copying /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt.6be6ca37 -> /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt
	I0914 00:36:00.820377  874848 certs.go:385] copying /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key.6be6ca37 -> /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key
	I0914 00:36:00.820432  874848 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key
	I0914 00:36:00.820453  874848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt with IP's: []
	I0914 00:36:01.002520  874848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt ...
	I0914 00:36:01.002560  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt: {Name:mkb7b3d55ccc68a6a5b5150959ff889ebad35b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:01.002757  874848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key ...
	I0914 00:36:01.002770  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key: {Name:mk1ef6af0211d101b3583380a03915d2b95c5f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:01.003925  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 00:36:01.003979  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:36:01.004010  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:36:01.004036  874848 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem (1679 bytes)
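
For illustration only (not part of the captured log): a minimal Go sketch of creating a self-signed CA certificate and key of the kind generated above (minikubeCA, proxyClientCA). Key size, validity window and output paths are assumptions, not minikube's exact parameters.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            BasicConstraintsValid: true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
        }

        // Self-signed: the template acts as both subject and issuer.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }

        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        _ = os.WriteFile("ca.crt", certPEM, 0644)
        _ = os.WriteFile("ca.key", keyPEM, 0600)
    }
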
	I0914 00:36:01.004717  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:36:01.031940  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 00:36:01.056903  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:36:01.083162  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 00:36:01.111392  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 00:36:01.147185  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:36:01.177160  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:36:01.205911  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:36:01.233312  874848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:36:01.259299  874848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:36:01.278922  874848 ssh_runner.go:195] Run: openssl version
	I0914 00:36:01.284640  874848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:36:01.296674  874848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:36:01.300328  874848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 00:35 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:36:01.300461  874848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:36:01.307711  874848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 00:36:01.317582  874848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:36:01.320964  874848 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 00:36:01.321016  874848 kubeadm.go:392] StartCluster: {Name:addons-885748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-885748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:36:01.321148  874848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 00:36:01.321218  874848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:36:01.359180  874848 cri.go:89] found id: ""
	I0914 00:36:01.359294  874848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 00:36:01.368589  874848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 00:36:01.378278  874848 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0914 00:36:01.378369  874848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:36:01.388604  874848 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:36:01.388628  874848 kubeadm.go:157] found existing configuration files:
	
	I0914 00:36:01.388687  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:36:01.397916  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:36:01.398044  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:36:01.407970  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:36:01.418575  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:36:01.418702  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:36:01.428387  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:36:01.437829  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:36:01.437915  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:36:01.446922  874848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:36:01.456143  874848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:36:01.456266  874848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:36:01.465388  874848 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 00:36:01.505666  874848 kubeadm.go:310] W0914 00:36:01.504983    1183 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:36:01.506942  874848 kubeadm.go:310] W0914 00:36:01.506340    1183 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:36:01.533882  874848 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0914 00:36:01.596564  874848 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 00:36:18.463208  874848 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 00:36:18.463272  874848 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 00:36:18.463364  874848 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0914 00:36:18.463422  874848 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0914 00:36:18.463465  874848 kubeadm.go:310] OS: Linux
	I0914 00:36:18.463513  874848 kubeadm.go:310] CGROUPS_CPU: enabled
	I0914 00:36:18.463569  874848 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0914 00:36:18.463623  874848 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0914 00:36:18.463685  874848 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0914 00:36:18.463738  874848 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0914 00:36:18.463797  874848 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0914 00:36:18.463846  874848 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0914 00:36:18.463898  874848 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0914 00:36:18.463954  874848 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0914 00:36:18.464031  874848 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 00:36:18.464129  874848 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 00:36:18.464222  874848 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 00:36:18.464287  874848 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 00:36:18.468842  874848 out.go:235]   - Generating certificates and keys ...
	I0914 00:36:18.468938  874848 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 00:36:18.469010  874848 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 00:36:18.469086  874848 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 00:36:18.469154  874848 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 00:36:18.469218  874848 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 00:36:18.469284  874848 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 00:36:18.469347  874848 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 00:36:18.469472  874848 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-885748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 00:36:18.469529  874848 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 00:36:18.469646  874848 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-885748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 00:36:18.469714  874848 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 00:36:18.469780  874848 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 00:36:18.469827  874848 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 00:36:18.469885  874848 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 00:36:18.469939  874848 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 00:36:18.469998  874848 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 00:36:18.470057  874848 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 00:36:18.470123  874848 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 00:36:18.470180  874848 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 00:36:18.470264  874848 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 00:36:18.470332  874848 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 00:36:18.472924  874848 out.go:235]   - Booting up control plane ...
	I0914 00:36:18.473034  874848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 00:36:18.473114  874848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 00:36:18.473210  874848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 00:36:18.473402  874848 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 00:36:18.473492  874848 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 00:36:18.473540  874848 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 00:36:18.473674  874848 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 00:36:18.473785  874848 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 00:36:18.473846  874848 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000717079s
	I0914 00:36:18.473919  874848 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 00:36:18.473979  874848 kubeadm.go:310] [api-check] The API server is healthy after 6.001506819s
	I0914 00:36:18.474086  874848 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 00:36:18.474212  874848 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 00:36:18.474272  874848 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 00:36:18.474458  874848 kubeadm.go:310] [mark-control-plane] Marking the node addons-885748 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 00:36:18.474517  874848 kubeadm.go:310] [bootstrap-token] Using token: d5jq5w.vhxle95wpku6sua3
	I0914 00:36:18.477217  874848 out.go:235]   - Configuring RBAC rules ...
	I0914 00:36:18.477426  874848 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 00:36:18.477516  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 00:36:18.477659  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 00:36:18.477798  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 00:36:18.477917  874848 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 00:36:18.478005  874848 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 00:36:18.478122  874848 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 00:36:18.478169  874848 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 00:36:18.478217  874848 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 00:36:18.478225  874848 kubeadm.go:310] 
	I0914 00:36:18.478284  874848 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 00:36:18.478295  874848 kubeadm.go:310] 
	I0914 00:36:18.478372  874848 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 00:36:18.478380  874848 kubeadm.go:310] 
	I0914 00:36:18.478405  874848 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 00:36:18.478483  874848 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 00:36:18.478539  874848 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 00:36:18.478547  874848 kubeadm.go:310] 
	I0914 00:36:18.478601  874848 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 00:36:18.478608  874848 kubeadm.go:310] 
	I0914 00:36:18.478659  874848 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 00:36:18.478666  874848 kubeadm.go:310] 
	I0914 00:36:18.478718  874848 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 00:36:18.478796  874848 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 00:36:18.478865  874848 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 00:36:18.478872  874848 kubeadm.go:310] 
	I0914 00:36:18.478956  874848 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 00:36:18.479036  874848 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 00:36:18.479043  874848 kubeadm.go:310] 
	I0914 00:36:18.479127  874848 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d5jq5w.vhxle95wpku6sua3 \
	I0914 00:36:18.479234  874848 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57751d36d4a8735ba13dc9bb14d661ba8c23675462a620d84c252b50ebcb21ac \
	I0914 00:36:18.479257  874848 kubeadm.go:310] 	--control-plane 
	I0914 00:36:18.479264  874848 kubeadm.go:310] 
	I0914 00:36:18.479348  874848 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 00:36:18.479356  874848 kubeadm.go:310] 
	I0914 00:36:18.479437  874848 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d5jq5w.vhxle95wpku6sua3 \
	I0914 00:36:18.479556  874848 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57751d36d4a8735ba13dc9bb14d661ba8c23675462a620d84c252b50ebcb21ac 
	I0914 00:36:18.479573  874848 cni.go:84] Creating CNI manager for ""
	I0914 00:36:18.479580  874848 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:36:18.482414  874848 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 00:36:18.485202  874848 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 00:36:18.488984  874848 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0914 00:36:18.489018  874848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0914 00:36:18.507633  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 00:36:18.797119  874848 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:36:18.797283  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:18.797372  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-885748 minikube.k8s.io/updated_at=2024_09_14T00_36_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=addons-885748 minikube.k8s.io/primary=true
	I0914 00:36:18.977875  874848 ops.go:34] apiserver oom_adj: -16
	I0914 00:36:18.977984  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:19.478709  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:19.978932  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:20.478468  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:20.978465  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:21.478838  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:21.978427  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:22.478979  874848 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:36:22.570954  874848 kubeadm.go:1113] duration metric: took 3.773734503s to wait for elevateKubeSystemPrivileges
	I0914 00:36:22.570993  874848 kubeadm.go:394] duration metric: took 21.249981733s to StartCluster
	I0914 00:36:22.571028  874848 settings.go:142] acquiring lock: {Name:mk58b1b9b697202ac4a931cd839962dd8a5a8fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:22.571754  874848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 00:36:22.572140  874848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/kubeconfig: {Name:mk4bce51b3b1a0b5e086688a43a01615410b8350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:36:22.572375  874848 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 00:36:22.572521  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 00:36:22.572784  874848 config.go:182] Loaded profile config "addons-885748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:36:22.572823  874848 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0914 00:36:22.572908  874848 addons.go:69] Setting yakd=true in profile "addons-885748"
	I0914 00:36:22.572928  874848 addons.go:234] Setting addon yakd=true in "addons-885748"
	I0914 00:36:22.572954  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.573611  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.573704  874848 addons.go:69] Setting inspektor-gadget=true in profile "addons-885748"
	I0914 00:36:22.573723  874848 addons.go:234] Setting addon inspektor-gadget=true in "addons-885748"
	I0914 00:36:22.573749  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.574184  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.574511  874848 addons.go:69] Setting cloud-spanner=true in profile "addons-885748"
	I0914 00:36:22.574548  874848 addons.go:234] Setting addon cloud-spanner=true in "addons-885748"
	I0914 00:36:22.574580  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.574991  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.576584  874848 addons.go:69] Setting metrics-server=true in profile "addons-885748"
	I0914 00:36:22.576658  874848 addons.go:234] Setting addon metrics-server=true in "addons-885748"
	I0914 00:36:22.576806  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.577674  874848 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-885748"
	I0914 00:36:22.577697  874848 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-885748"
	I0914 00:36:22.577728  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.578157  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.578583  874848 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-885748"
	I0914 00:36:22.578673  874848 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-885748"
	I0914 00:36:22.578735  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.579310  874848 addons.go:69] Setting default-storageclass=true in profile "addons-885748"
	I0914 00:36:22.579360  874848 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-885748"
	I0914 00:36:22.579654  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.584636  874848 addons.go:69] Setting registry=true in profile "addons-885748"
	I0914 00:36:22.584679  874848 addons.go:234] Setting addon registry=true in "addons-885748"
	I0914 00:36:22.584722  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.585212  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.593639  874848 addons.go:69] Setting gcp-auth=true in profile "addons-885748"
	I0914 00:36:22.593697  874848 mustload.go:65] Loading cluster: addons-885748
	I0914 00:36:22.593833  874848 addons.go:69] Setting ingress=true in profile "addons-885748"
	I0914 00:36:22.593875  874848 addons.go:234] Setting addon ingress=true in "addons-885748"
	I0914 00:36:22.593947  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.594556  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.598655  874848 addons.go:69] Setting storage-provisioner=true in profile "addons-885748"
	I0914 00:36:22.598696  874848 addons.go:234] Setting addon storage-provisioner=true in "addons-885748"
	I0914 00:36:22.598738  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.599322  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.608030  874848 addons.go:69] Setting ingress-dns=true in profile "addons-885748"
	I0914 00:36:22.608066  874848 addons.go:234] Setting addon ingress-dns=true in "addons-885748"
	I0914 00:36:22.608124  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.608719  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.626438  874848 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-885748"
	I0914 00:36:22.626484  874848 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-885748"
	I0914 00:36:22.627015  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.646668  874848 out.go:177] * Verifying Kubernetes components...
	I0914 00:36:22.646964  874848 addons.go:69] Setting volcano=true in profile "addons-885748"
	I0914 00:36:22.646995  874848 addons.go:234] Setting addon volcano=true in "addons-885748"
	I0914 00:36:22.647044  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.647621  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.705055  874848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:36:22.647935  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.727451  874848 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 00:36:22.730961  874848 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 00:36:22.731026  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 00:36:22.731127  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.648200  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.654108  874848 config.go:182] Loaded profile config "addons-885748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:36:22.761663  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.769083  874848 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 00:36:22.764527  874848 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-885748"
	I0914 00:36:22.769445  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.769907  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.656713  874848 addons.go:69] Setting volumesnapshots=true in profile "addons-885748"
	I0914 00:36:22.779473  874848 addons.go:234] Setting addon volumesnapshots=true in "addons-885748"
	I0914 00:36:22.779518  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.779996  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.792421  874848 addons.go:234] Setting addon default-storageclass=true in "addons-885748"
	I0914 00:36:22.792474  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:22.793016  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:22.798974  874848 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 00:36:22.798996  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 00:36:22.799056  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.807827  874848 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 00:36:22.808051  874848 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 00:36:22.813221  874848 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 00:36:22.818788  874848 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:36:22.821693  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:36:22.821715  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 00:36:22.836132  874848 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 00:36:22.836202  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.857667  874848 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:36:22.857695  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 00:36:22.857766  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.833063  874848 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 00:36:22.861715  874848 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 00:36:22.861794  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.901334  874848 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 00:36:22.901725  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:36:22.925512  874848 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 00:36:22.925579  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 00:36:22.925682  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.938211  874848 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	W0914 00:36:22.945665  874848 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0914 00:36:22.970824  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0914 00:36:22.981537  874848 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 00:36:22.981638  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0914 00:36:22.981747  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:22.948860  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 00:36:23.001198  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.002556  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:23.010390  874848 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 00:36:23.010488  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 00:36:23.010589  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.015908  874848 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 00:36:23.020213  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 00:36:23.020313  874848 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 00:36:23.020408  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.010624  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 00:36:23.028044  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 00:36:23.029030  874848 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 00:36:23.029111  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 00:36:23.030953  874848 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 00:36:23.031021  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.030844  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.040780  874848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:36:23.041521  874848 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 00:36:23.041540  874848 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 00:36:23.041615  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.043116  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 00:36:23.045748  874848 out.go:177]   - Using image docker.io/busybox:stable
	I0914 00:36:23.057729  874848 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 00:36:23.057758  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 00:36:23.057830  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.064630  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 00:36:23.067439  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 00:36:23.070090  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 00:36:23.072657  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 00:36:23.077385  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 00:36:23.080086  874848 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 00:36:23.082722  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 00:36:23.082752  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 00:36:23.082824  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:23.102635  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.164403  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.164493  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.181637  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.210074  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.221553  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.231587  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.243309  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.245585  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.246302  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.254741  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:23.413347  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 00:36:23.499318  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 00:36:23.563619  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 00:36:23.563646  874848 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 00:36:23.621774  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:36:23.632161  874848 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 00:36:23.632237  874848 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 00:36:23.658865  874848 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 00:36:23.658965  874848 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 00:36:23.678416  874848 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 00:36:23.678502  874848 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 00:36:23.687807  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 00:36:23.690302  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 00:36:23.690386  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 00:36:23.692343  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 00:36:23.741847  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 00:36:23.741869  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 00:36:23.753765  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 00:36:23.783717  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 00:36:23.783739  874848 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 00:36:23.798983  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 00:36:23.801823  874848 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 00:36:23.801892  874848 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 00:36:23.865228  874848 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 00:36:23.865305  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 00:36:23.896358  874848 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 00:36:23.896422  874848 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 00:36:23.908740  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 00:36:23.908812  874848 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 00:36:23.912605  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 00:36:23.912686  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 00:36:23.950755  874848 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 00:36:23.950831  874848 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 00:36:23.989143  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 00:36:23.989215  874848 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 00:36:24.045999  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 00:36:24.067005  874848 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 00:36:24.067084  874848 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 00:36:24.094537  874848 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 00:36:24.094616  874848 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 00:36:24.121448  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 00:36:24.121549  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 00:36:24.152405  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 00:36:24.152477  874848 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 00:36:24.187995  874848 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 00:36:24.188063  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 00:36:24.248301  874848 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 00:36:24.248379  874848 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 00:36:24.263656  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 00:36:24.263745  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 00:36:24.270468  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 00:36:24.279636  874848 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:36:24.279710  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 00:36:24.361088  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:36:24.363766  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 00:36:24.391930  874848 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 00:36:24.392005  874848 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 00:36:24.404762  874848 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 00:36:24.404847  874848 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 00:36:24.532083  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 00:36:24.532154  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 00:36:24.541629  874848 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 00:36:24.541706  874848 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 00:36:24.586534  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 00:36:24.586608  874848 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 00:36:24.613471  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 00:36:24.613547  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 00:36:24.635565  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 00:36:24.635636  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 00:36:24.636353  874848 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 00:36:24.636397  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 00:36:24.697087  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 00:36:24.718922  874848 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 00:36:24.719002  874848 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 00:36:24.797070  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 00:36:26.012706  874848 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.019755946s)
	I0914 00:36:26.012793  874848 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0914 00:36:26.013955  874848 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.973152264s)
	I0914 00:36:26.015246  874848 node_ready.go:35] waiting up to 6m0s for node "addons-885748" to be "Ready" ...
	I0914 00:36:26.818351  874848 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-885748" context rescaled to 1 replicas
	I0914 00:36:27.031638  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.618251032s)
	I0914 00:36:27.031747  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.532359077s)
	I0914 00:36:27.943561  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.321702676s)
	I0914 00:36:28.026391  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:29.167249  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.4793608s)
	I0914 00:36:29.167283  874848 addons.go:475] Verifying addon ingress=true in "addons-885748"
	I0914 00:36:29.167350  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.47493769s)
	I0914 00:36:29.167560  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.413773964s)
	I0914 00:36:29.167638  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.36858762s)
	I0914 00:36:29.167755  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.121680395s)
	I0914 00:36:29.167774  874848 addons.go:475] Verifying addon registry=true in "addons-885748"
	I0914 00:36:29.168314  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.897753986s)
	I0914 00:36:29.168337  874848 addons.go:475] Verifying addon metrics-server=true in "addons-885748"
	I0914 00:36:29.170573  874848 out.go:177] * Verifying ingress addon...
	I0914 00:36:29.170589  874848 out.go:177] * Verifying registry addon...
	I0914 00:36:29.173514  874848 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 00:36:29.174597  874848 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 00:36:29.186358  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.825179108s)
	W0914 00:36:29.186394  874848 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 00:36:29.186416  874848 retry.go:31] will retry after 308.598821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 00:36:29.186470  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.822639917s)
	I0914 00:36:29.186790  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.489573307s)
	I0914 00:36:29.190776  874848 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-885748 service yakd-dashboard -n yakd-dashboard
	
	I0914 00:36:29.212739  874848 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 00:36:29.212822  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0914 00:36:29.216216  874848 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0914 00:36:29.217682  874848 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 00:36:29.217744  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:29.495756  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:36:29.512701  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.715530225s)
	I0914 00:36:29.512744  874848 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-885748"
	I0914 00:36:29.515691  874848 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 00:36:29.519486  874848 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 00:36:29.539384  874848 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 00:36:29.539410  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:29.680191  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:29.681532  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:30.038990  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:30.044266  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:30.207217  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:30.207796  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:30.532034  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:30.684159  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:30.690166  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:30.825068  874848 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.329257891s)
	I0914 00:36:31.024298  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:31.191902  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:31.193352  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:31.523717  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:31.679556  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:31.679786  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:32.025958  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:32.177305  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:32.178350  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:32.520588  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:32.524000  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:32.680213  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:32.680810  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:33.030499  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:33.183678  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:33.184449  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:33.377273  874848 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 00:36:33.377351  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:33.401444  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:33.523097  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:33.527304  874848 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 00:36:33.559831  874848 addons.go:234] Setting addon gcp-auth=true in "addons-885748"
	I0914 00:36:33.559879  874848 host.go:66] Checking if "addons-885748" exists ...
	I0914 00:36:33.560345  874848 cli_runner.go:164] Run: docker container inspect addons-885748 --format={{.State.Status}}
	I0914 00:36:33.581054  874848 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 00:36:33.581121  874848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885748
	I0914 00:36:33.600889  874848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/addons-885748/id_rsa Username:docker}
	I0914 00:36:33.680155  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:33.681074  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:33.693949  874848 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:36:33.696552  874848 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 00:36:33.699092  874848 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 00:36:33.699119  874848 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 00:36:33.725491  874848 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 00:36:33.725513  874848 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 00:36:33.757659  874848 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 00:36:33.757696  874848 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 00:36:33.780450  874848 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 00:36:34.023974  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:34.184181  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:34.186273  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:34.384150  874848 addons.go:475] Verifying addon gcp-auth=true in "addons-885748"
	I0914 00:36:34.387248  874848 out.go:177] * Verifying gcp-auth addon...
	I0914 00:36:34.390886  874848 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 00:36:34.407033  874848 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 00:36:34.407059  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:34.523397  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:34.677151  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:34.678963  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:34.894681  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:35.022306  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:35.024438  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:35.180032  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:35.183480  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:35.394703  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:35.523865  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:35.678515  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:35.678806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:35.894818  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:36.023008  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:36.177908  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:36.178956  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:36.394280  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:36.523510  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:36.678580  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:36.679226  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:36.894421  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:37.022844  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:37.024157  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:37.179135  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:37.179370  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:37.394553  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:37.524074  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:37.678080  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:37.679783  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:37.894034  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:38.024946  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:38.177683  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:38.179187  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:38.394540  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:38.522543  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:38.677451  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:38.679196  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:38.894643  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:39.024177  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:39.176814  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:39.178126  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:39.394403  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:39.518971  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:39.523042  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:39.677285  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:39.678415  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:39.894993  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:40.023302  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:40.177076  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:40.179028  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:40.394726  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:40.523300  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:40.678856  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:40.679285  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:40.894199  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:41.023013  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:41.177101  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:41.179290  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:41.394521  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:41.523390  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:41.677524  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:41.678904  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:41.894216  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:42.019193  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:42.023697  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:42.179722  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:42.181177  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:42.394876  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:42.523050  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:42.678341  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:42.679685  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:42.894916  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:43.023233  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:43.178682  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:43.179104  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:43.393946  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:43.523203  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:43.678371  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:43.679407  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:43.894754  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:44.026113  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:44.026149  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:44.177508  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:44.178927  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:44.394341  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:44.523594  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:44.676754  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:44.678698  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:44.893862  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:45.036741  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:45.178582  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:45.179375  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:45.393987  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:45.523508  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:45.677652  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:45.679385  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:45.894863  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:46.022919  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:46.177463  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:46.179089  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:46.394354  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:46.519328  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:46.523445  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:46.681121  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:46.681456  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:46.894381  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:47.028265  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:47.176495  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:47.178087  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:47.394423  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:47.522613  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:47.678292  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:47.679289  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:47.894860  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:48.024213  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:48.179671  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:48.179824  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:48.394247  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:48.519429  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:48.523282  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:48.678115  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:48.678922  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:48.894375  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:49.023908  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:49.177643  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:49.178596  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:49.394010  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:49.522523  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:49.676673  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:49.678634  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:49.893972  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:50.022979  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:50.177122  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:50.178955  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:50.394118  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:50.522454  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:50.677309  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:50.679281  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:50.894463  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:51.018819  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:51.023156  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:51.176878  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:51.178776  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:51.394280  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:51.523133  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:51.677466  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:51.679143  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:51.894631  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:52.023396  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:52.178391  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:52.179266  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:52.397286  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:52.522690  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:52.678357  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:52.679217  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:52.894486  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:53.019119  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:53.023307  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:53.179178  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:53.180677  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:53.394368  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:53.523183  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:53.676964  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:53.678364  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:53.894958  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:54.023821  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:54.177805  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:54.179585  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:54.394992  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:54.522425  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:54.676902  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:54.679155  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:54.894649  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:55.019727  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:55.023310  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:55.178226  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:55.178305  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:55.394779  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:55.522925  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:55.678915  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:55.679410  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:55.894279  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:56.023547  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:56.177234  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:56.178937  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:56.394264  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:56.523247  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:56.677609  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:56.678834  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:56.894510  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:57.023730  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:57.176949  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:57.180346  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:57.394998  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:57.519266  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:36:57.522884  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:57.677959  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:57.678792  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:57.894825  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:58.023518  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:58.178392  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:58.179460  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:58.394911  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:58.522691  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:58.677369  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:58.678790  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:58.895085  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:59.022531  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:59.177915  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:59.178932  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:59.394803  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:36:59.522983  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:36:59.677060  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:36:59.678616  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:36:59.894389  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:00.020453  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:00.066148  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:00.187116  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:00.189373  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:00.395382  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:00.523090  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:00.677052  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:00.679313  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:00.894619  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:01.022440  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:01.177549  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:01.179048  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:01.394471  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:01.522894  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:01.678225  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:01.679589  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:01.894931  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:02.023563  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:02.176988  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:02.179706  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:02.394268  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:02.519079  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:02.522948  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:02.676989  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:02.679144  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:02.896226  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:03.022882  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:03.177959  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:03.179565  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:03.394297  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:03.523549  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:03.677616  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:03.679072  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:03.894507  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:04.023469  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:04.177349  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:04.179232  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:04.393784  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:04.522550  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:04.676854  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:04.678639  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:04.895316  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:05.018889  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:05.023416  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:05.177247  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:05.179114  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:05.394990  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:05.522362  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:05.678794  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:05.678970  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:05.893966  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:06.023096  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:06.177160  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:06.177654  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:06.394767  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:06.522308  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:06.678541  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:06.679024  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:06.894066  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:07.019192  874848 node_ready.go:53] node "addons-885748" has status "Ready":"False"
	I0914 00:37:07.023773  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:07.177818  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:07.178447  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:07.396991  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:07.536462  874848 node_ready.go:49] node "addons-885748" has status "Ready":"True"
	I0914 00:37:07.536540  874848 node_ready.go:38] duration metric: took 41.52122498s for node "addons-885748" to be "Ready" ...
	I0914 00:37:07.536564  874848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:37:07.545962  874848 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 00:37:07.545989  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:07.560429  874848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8m89r" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:07.735954  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:07.737075  874848 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 00:37:07.737140  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:07.904390  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:08.025045  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:08.203301  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:08.204003  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:08.398762  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:08.524366  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:08.680865  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:08.681279  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:08.900177  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:09.025214  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:09.185641  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:09.187308  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:09.397596  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:09.524714  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:09.567577  874848 pod_ready.go:93] pod "coredns-7c65d6cfc9-8m89r" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.567609  874848 pod_ready.go:82] duration metric: took 2.007088321s for pod "coredns-7c65d6cfc9-8m89r" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.567631  874848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.579438  874848 pod_ready.go:93] pod "etcd-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.579467  874848 pod_ready.go:82] duration metric: took 11.821727ms for pod "etcd-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.579484  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.585364  874848 pod_ready.go:93] pod "kube-apiserver-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.585385  874848 pod_ready.go:82] duration metric: took 5.89278ms for pod "kube-apiserver-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.585397  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.592284  874848 pod_ready.go:93] pod "kube-controller-manager-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.592307  874848 pod_ready.go:82] duration metric: took 6.902865ms for pod "kube-controller-manager-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.592321  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dqs2h" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.602562  874848 pod_ready.go:93] pod "kube-proxy-dqs2h" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.602588  874848 pod_ready.go:82] duration metric: took 10.259695ms for pod "kube-proxy-dqs2h" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.602600  874848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.681569  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:09.682934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:09.897633  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:09.965408  874848 pod_ready.go:93] pod "kube-scheduler-addons-885748" in "kube-system" namespace has status "Ready":"True"
	I0914 00:37:09.965433  874848 pod_ready.go:82] duration metric: took 362.810925ms for pod "kube-scheduler-addons-885748" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:09.965445  874848 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace to be "Ready" ...
	I0914 00:37:10.026493  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:10.179971  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:10.182101  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:10.395509  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:10.526262  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:10.677859  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:10.679418  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:10.895078  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:11.025168  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:11.178262  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:11.178738  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:11.395621  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:11.524715  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:11.679184  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:11.679849  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:11.895381  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:11.971662  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:12.025550  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:12.181156  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:12.182835  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:12.394926  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:12.531168  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:12.679085  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:12.680606  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:12.895370  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:13.024873  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:13.177451  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:13.180418  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:13.395380  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:13.525613  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:13.680824  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:13.681895  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:13.895500  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:14.025845  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:14.179688  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:14.180935  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:14.394764  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:14.471939  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:14.524071  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:14.677438  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:14.679890  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:14.894272  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:15.025996  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:15.178028  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:15.180621  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:15.395485  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:15.523999  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:15.678417  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:15.678947  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:15.894558  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:16.025025  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:16.178905  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:16.180441  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:16.395296  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:16.473887  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:16.525561  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:16.678894  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:16.680480  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:16.896742  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:17.026245  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:17.182060  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:17.184538  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:17.395416  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:17.526035  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:17.681664  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:17.683402  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:17.895817  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:18.025795  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:18.181775  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:18.181945  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:18.395803  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:18.524488  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:18.677526  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:18.681893  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:18.894318  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:18.972325  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:19.024913  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:19.178927  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:19.181186  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:19.394794  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:19.524419  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:19.679432  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:19.680935  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:19.894744  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:20.024634  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:20.178560  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:20.179605  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:20.394255  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:20.524521  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:20.676974  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:20.684973  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:20.894834  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:21.025765  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:21.180351  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:21.181362  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:21.399049  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:21.475732  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:21.528175  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:21.680488  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:21.681930  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:21.894250  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:22.024817  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:22.177928  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:22.179422  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:22.394499  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:22.525146  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:22.678627  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:22.680308  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:22.895246  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:23.025863  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:23.177031  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:23.180339  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:23.396492  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:23.524470  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:23.678638  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:23.679382  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:23.895207  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:23.977712  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:24.029304  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:24.180362  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:24.183282  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:24.396641  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:24.529357  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:24.682468  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:24.684392  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:24.895280  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:25.025730  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:25.181572  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:25.183727  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:25.395405  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:25.524566  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:25.682333  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:25.683779  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:25.901528  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:25.980022  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:26.035812  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:26.184465  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:26.185902  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:26.399183  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:26.525590  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:26.684422  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:26.685595  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:26.895348  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:27.024667  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:27.178206  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:27.179539  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:27.395704  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:27.525158  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:27.679873  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:27.680479  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:27.897323  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:28.024852  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:28.179057  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:28.179565  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:28.394541  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:28.471727  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:28.524321  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:28.679544  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:28.680139  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:28.894850  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:29.024419  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:29.179889  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:29.180105  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:29.394579  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:29.525631  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:29.677384  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:29.679484  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:29.895140  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:30.039214  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:30.181482  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:30.191420  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:30.394455  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:30.472915  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:30.527806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:30.682674  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:30.686718  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:30.894383  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:31.025227  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:31.179957  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:31.181007  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:31.394945  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:31.524555  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:31.677768  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:31.680475  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:31.894695  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:32.025054  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:32.177978  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:32.179683  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:32.395007  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:32.524604  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:32.678592  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:32.679923  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:32.894812  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:32.973284  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:33.026118  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:33.180514  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:33.181997  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:33.394813  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:33.525731  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:33.680977  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:33.682906  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:33.895068  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:34.025233  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:34.179148  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:34.182917  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:34.395651  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:34.526147  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:34.684709  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:34.686035  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:34.895105  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:35.026583  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:35.183369  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:35.185238  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:35.394700  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:35.472721  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:35.525329  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:35.680355  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:35.681672  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:35.894138  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:36.025921  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:36.180639  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:36.181945  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:36.395181  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:36.525218  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:36.683856  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:36.688534  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:36.895037  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:37.026844  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:37.180732  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:37.181806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:37.395037  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:37.473242  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:37.525608  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:37.685407  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:37.688853  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:37.896431  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:38.026407  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:38.178835  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:38.180018  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:38.395082  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:38.524569  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:38.679237  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:38.680243  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:38.895023  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:39.024964  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:39.178384  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:39.180349  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:39.394794  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:39.524736  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:39.679043  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:39.680200  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:39.894788  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:39.972306  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:40.026022  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:40.178841  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:40.180531  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:40.396615  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:40.526316  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:40.679017  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:40.681396  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:40.895111  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:41.025310  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:41.180533  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:41.182040  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:41.394933  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:41.525086  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:41.678595  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:41.681805  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:41.894264  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:41.972495  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:42.027746  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:42.181385  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:42.183231  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:42.395231  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:42.525359  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:42.686622  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:42.687690  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:42.894396  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:43.026528  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:43.178411  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:43.180614  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:43.394781  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:43.526157  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:43.678825  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:43.680171  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:43.894755  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:43.974886  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:44.025244  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:44.180632  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:44.180874  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:44.394573  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:44.525492  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:44.679244  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:44.680033  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:37:44.894811  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:45.026849  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:45.180224  874848 kapi.go:107] duration metric: took 1m16.006705195s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 00:37:45.181335  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:45.394742  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:45.524737  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:45.678853  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:45.894538  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:46.025270  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:46.180187  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:46.420542  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:46.473873  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:46.527047  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:46.690644  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:46.895272  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:47.025191  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:47.180081  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:47.394774  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:47.524580  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:47.679051  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:47.895292  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:48.028400  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:48.181125  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:48.395824  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:48.526249  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:48.680363  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:48.894934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:48.973378  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:49.024934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:49.180049  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:49.394655  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:49.525575  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:49.678950  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:49.895006  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:50.027602  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:50.180508  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:50.395361  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:50.525087  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:50.679749  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:50.894761  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:50.974000  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:51.026760  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:51.180269  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:51.395059  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:51.525416  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:51.678919  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:51.895522  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:52.025040  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:52.179046  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:52.394934  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:52.524525  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:52.679988  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:52.894201  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:53.024676  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:53.179453  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:53.394512  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:53.471888  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:53.523864  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:53.679106  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:53.896220  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:54.024917  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:54.180335  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:54.396636  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:54.526135  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:54.679867  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:54.912541  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:55.034071  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:55.179674  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:55.395396  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:55.473836  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:55.528494  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:55.680286  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:55.895006  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:56.027576  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:56.179160  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:56.398064  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:56.525392  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:56.680283  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:56.895453  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:57.024877  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:57.179495  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:57.395302  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:57.526310  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:57.678953  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:57.894929  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:57.978490  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:37:58.024590  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:58.182030  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:58.396784  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:58.526220  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:58.679161  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:58.894516  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:59.045878  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:59.183420  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:59.395337  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:37:59.525591  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:37:59.679994  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:37:59.896190  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:00.044921  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:00.276537  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:00.395763  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:00.472688  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:00.524316  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:00.679693  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:00.894767  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:01.024436  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:01.182184  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:01.396167  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:01.525666  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:01.679817  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:01.895495  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:02.026019  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:02.180745  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:02.395882  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:02.525241  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:02.679057  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:02.894767  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:02.975993  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:03.025801  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:03.180760  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:03.395339  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:03.526291  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:03.679567  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:03.899232  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:04.024210  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:04.179325  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:04.395706  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:04.524231  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:04.679479  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:04.894905  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:05.027840  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:05.182382  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:05.396023  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:05.473085  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:05.525806  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:05.679191  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:05.896374  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:06.029480  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:06.178890  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:06.395107  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:06.525046  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:06.679017  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:06.894377  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:07.024327  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:07.178898  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:07.398532  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:07.473351  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:07.525318  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:07.681140  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:07.894913  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:08.027202  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:08.182979  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:08.395165  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:08.524187  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:38:08.694704  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:08.895184  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:09.030109  874848 kapi.go:107] duration metric: took 1m39.510623393s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 00:38:09.179285  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:09.395413  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:09.679463  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:09.895033  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:09.973120  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:10.179271  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:10.394612  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:10.679453  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:10.895174  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:11.178632  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:11.395338  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:11.679157  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:11.894373  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:12.179833  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:12.394349  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:12.471290  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:12.679175  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:12.895106  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:13.178590  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:13.396117  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:13.680434  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:13.894967  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:14.180563  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:14.396225  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:14.471905  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:14.679676  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:14.895516  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:15.179205  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:15.396426  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:15.679433  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:15.894213  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:16.179496  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:16.395328  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:16.476000  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:16.680237  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:16.896031  874848 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:38:17.180049  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:17.395144  874848 kapi.go:107] duration metric: took 1m43.00425795s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 00:38:17.398321  874848 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-885748 cluster.
	I0914 00:38:17.400983  874848 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 00:38:17.403694  874848 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 00:38:17.679164  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:18.180368  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:18.483241  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:18.680385  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:19.184791  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:19.679772  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:20.180317  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:20.680340  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:20.972195  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:21.178702  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:21.679238  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:22.180491  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:22.681102  874848 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:38:23.189543  874848 kapi.go:107] duration metric: took 1m54.01494077s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 00:38:23.191218  874848 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0914 00:38:23.192745  874848 addons.go:510] duration metric: took 2m0.619913914s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0914 00:38:23.475147  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:25.971975  874848 pod_ready.go:103] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"False"
	I0914 00:38:28.477226  874848 pod_ready.go:93] pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace has status "Ready":"True"
	I0914 00:38:28.477369  874848 pod_ready.go:82] duration metric: took 1m18.511914681s for pod "metrics-server-84c5f94fbc-96xbg" in "kube-system" namespace to be "Ready" ...
	I0914 00:38:28.477405  874848 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9nphx" in "kube-system" namespace to be "Ready" ...
	I0914 00:38:28.484642  874848 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9nphx" in "kube-system" namespace has status "Ready":"True"
	I0914 00:38:28.484732  874848 pod_ready.go:82] duration metric: took 7.280703ms for pod "nvidia-device-plugin-daemonset-9nphx" in "kube-system" namespace to be "Ready" ...
	I0914 00:38:28.484774  874848 pod_ready.go:39] duration metric: took 1m20.948183548s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:38:28.484841  874848 api_server.go:52] waiting for apiserver process to appear ...
	I0914 00:38:28.484919  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:38:28.485034  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:38:28.547414  874848 cri.go:89] found id: "48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:28.547485  874848 cri.go:89] found id: ""
	I0914 00:38:28.547506  874848 logs.go:276] 1 containers: [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4]
	I0914 00:38:28.547595  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.551987  874848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:38:28.552116  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:38:28.598910  874848 cri.go:89] found id: "f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:28.598933  874848 cri.go:89] found id: ""
	I0914 00:38:28.598941  874848 logs.go:276] 1 containers: [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7]
	I0914 00:38:28.599013  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.602400  874848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:38:28.602560  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:38:28.644171  874848 cri.go:89] found id: "80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:28.644192  874848 cri.go:89] found id: ""
	I0914 00:38:28.644201  874848 logs.go:276] 1 containers: [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9]
	I0914 00:38:28.644254  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.647972  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:38:28.648065  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:38:28.684644  874848 cri.go:89] found id: "d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:28.684667  874848 cri.go:89] found id: ""
	I0914 00:38:28.684675  874848 logs.go:276] 1 containers: [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289]
	I0914 00:38:28.684761  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.689599  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:38:28.689693  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:38:28.727470  874848 cri.go:89] found id: "a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:28.727491  874848 cri.go:89] found id: ""
	I0914 00:38:28.727499  874848 logs.go:276] 1 containers: [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887]
	I0914 00:38:28.727552  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.731365  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:38:28.731447  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:38:28.771519  874848 cri.go:89] found id: "f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:28.771541  874848 cri.go:89] found id: ""
	I0914 00:38:28.771550  874848 logs.go:276] 1 containers: [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da]
	I0914 00:38:28.771625  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.775121  874848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:38:28.775189  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:38:28.814792  874848 cri.go:89] found id: "56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:28.814816  874848 cri.go:89] found id: ""
	I0914 00:38:28.814824  874848 logs.go:276] 1 containers: [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d]
	I0914 00:38:28.814877  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:28.818284  874848 logs.go:123] Gathering logs for kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] ...
	I0914 00:38:28.818307  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:28.891320  874848 logs.go:123] Gathering logs for etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] ...
	I0914 00:38:28.891360  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:28.937126  874848 logs.go:123] Gathering logs for coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] ...
	I0914 00:38:28.937157  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:28.983373  874848 logs.go:123] Gathering logs for kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] ...
	I0914 00:38:28.983404  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:29.030599  874848 logs.go:123] Gathering logs for container status ...
	I0914 00:38:29.030626  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:38:29.088803  874848 logs.go:123] Gathering logs for kubelet ...
	I0914 00:38:29.088834  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 00:38:29.133183  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.432207    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.133455  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.432263    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.133676  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.439952    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.133911  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440000    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.134100  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.440181    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.134329  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440215    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.134541  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461184    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.134794  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461241    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.134992  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461318    1502 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.135240  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.135446  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.135709  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.135907  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.136142  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:29.192135  874848 logs.go:123] Gathering logs for dmesg ...
	I0914 00:38:29.192184  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:38:29.210094  874848 logs.go:123] Gathering logs for kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] ...
	I0914 00:38:29.210125  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:29.301224  874848 logs.go:123] Gathering logs for kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] ...
	I0914 00:38:29.301271  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:29.347119  874848 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:38:29.347147  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:38:29.444517  874848 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:38:29.444551  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:38:29.632311  874848 logs.go:123] Gathering logs for kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] ...
	I0914 00:38:29.632339  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:29.679537  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:29.679564  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 00:38:29.679625  874848 out.go:270] X Problems detected in kubelet:
	W0914 00:38:29.679636  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.679644  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.679651  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:29.679704  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:29.679712  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:29.679719  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:29.679725  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:38:39.681411  874848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:38:39.695346  874848 api_server.go:72] duration metric: took 2m17.122934524s to wait for apiserver process to appear ...
	I0914 00:38:39.695371  874848 api_server.go:88] waiting for apiserver healthz status ...
	I0914 00:38:39.695407  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:38:39.695463  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:38:39.743999  874848 cri.go:89] found id: "48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:39.744019  874848 cri.go:89] found id: ""
	I0914 00:38:39.744026  874848 logs.go:276] 1 containers: [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4]
	I0914 00:38:39.744108  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.748186  874848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:38:39.748271  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:38:39.786567  874848 cri.go:89] found id: "f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:39.786591  874848 cri.go:89] found id: ""
	I0914 00:38:39.786600  874848 logs.go:276] 1 containers: [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7]
	I0914 00:38:39.786673  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.790106  874848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:38:39.790172  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:38:39.830802  874848 cri.go:89] found id: "80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:39.830825  874848 cri.go:89] found id: ""
	I0914 00:38:39.830832  874848 logs.go:276] 1 containers: [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9]
	I0914 00:38:39.830891  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.834483  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:38:39.834578  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:38:39.873400  874848 cri.go:89] found id: "d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:39.873426  874848 cri.go:89] found id: ""
	I0914 00:38:39.873435  874848 logs.go:276] 1 containers: [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289]
	I0914 00:38:39.873493  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.877489  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:38:39.877568  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:38:39.915990  874848 cri.go:89] found id: "a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:39.916016  874848 cri.go:89] found id: ""
	I0914 00:38:39.916025  874848 logs.go:276] 1 containers: [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887]
	I0914 00:38:39.916112  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.919561  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:38:39.919637  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:38:39.957315  874848 cri.go:89] found id: "f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:39.957383  874848 cri.go:89] found id: ""
	I0914 00:38:39.957405  874848 logs.go:276] 1 containers: [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da]
	I0914 00:38:39.957474  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:39.960827  874848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:38:39.960894  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:38:40.000698  874848 cri.go:89] found id: "56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:40.000764  874848 cri.go:89] found id: ""
	I0914 00:38:40.000787  874848 logs.go:276] 1 containers: [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d]
	I0914 00:38:40.000868  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:40.009160  874848 logs.go:123] Gathering logs for kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] ...
	I0914 00:38:40.009238  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:40.063889  874848 logs.go:123] Gathering logs for kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] ...
	I0914 00:38:40.063916  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:40.140420  874848 logs.go:123] Gathering logs for container status ...
	I0914 00:38:40.140455  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:38:40.191420  874848 logs.go:123] Gathering logs for kubelet ...
	I0914 00:38:40.191454  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 00:38:40.233432  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.432207    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.233678  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.432263    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.233863  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.439952    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.234086  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440000    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.234255  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.440181    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.234464  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440215    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.234649  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461184    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.234875  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461241    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.235058  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461318    1502 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.235282  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.235469  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.235697  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.235870  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.236085  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:40.287929  874848 logs.go:123] Gathering logs for dmesg ...
	I0914 00:38:40.287960  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:38:40.304167  874848 logs.go:123] Gathering logs for coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] ...
	I0914 00:38:40.304197  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:40.351418  874848 logs.go:123] Gathering logs for kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] ...
	I0914 00:38:40.351450  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:40.405932  874848 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:38:40.405964  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:38:40.500837  874848 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:38:40.500877  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:38:40.647711  874848 logs.go:123] Gathering logs for kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] ...
	I0914 00:38:40.647741  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:40.699610  874848 logs.go:123] Gathering logs for etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] ...
	I0914 00:38:40.699643  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:40.758127  874848 logs.go:123] Gathering logs for kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] ...
	I0914 00:38:40.758155  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:40.808598  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:40.808623  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 00:38:40.808730  874848 out.go:270] X Problems detected in kubelet:
	W0914 00:38:40.808745  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.808772  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.808781  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:40.808787  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:40.808793  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:40.808806  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:40.808813  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:38:50.810748  874848 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 00:38:50.820324  874848 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 00:38:50.821343  874848 api_server.go:141] control plane version: v1.31.1
	I0914 00:38:50.821369  874848 api_server.go:131] duration metric: took 11.125990917s to wait for apiserver health ...
	I0914 00:38:50.821379  874848 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:38:50.821403  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 00:38:50.821465  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 00:38:50.857789  874848 cri.go:89] found id: "48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:50.857812  874848 cri.go:89] found id: ""
	I0914 00:38:50.857820  874848 logs.go:276] 1 containers: [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4]
	I0914 00:38:50.857879  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:50.862216  874848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 00:38:50.862284  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 00:38:50.900268  874848 cri.go:89] found id: "f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:50.900291  874848 cri.go:89] found id: ""
	I0914 00:38:50.900299  874848 logs.go:276] 1 containers: [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7]
	I0914 00:38:50.900373  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:50.903842  874848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 00:38:50.903933  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 00:38:50.942518  874848 cri.go:89] found id: "80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:50.942541  874848 cri.go:89] found id: ""
	I0914 00:38:50.942549  874848 logs.go:276] 1 containers: [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9]
	I0914 00:38:50.942619  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:50.946096  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 00:38:50.946185  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 00:38:51.008164  874848 cri.go:89] found id: "d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:51.008212  874848 cri.go:89] found id: ""
	I0914 00:38:51.008227  874848 logs.go:276] 1 containers: [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289]
	I0914 00:38:51.008295  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.013303  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 00:38:51.013405  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 00:38:51.060066  874848 cri.go:89] found id: "a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:51.060149  874848 cri.go:89] found id: ""
	I0914 00:38:51.060172  874848 logs.go:276] 1 containers: [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887]
	I0914 00:38:51.060263  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.064118  874848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 00:38:51.064238  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 00:38:51.110490  874848 cri.go:89] found id: "f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:51.110528  874848 cri.go:89] found id: ""
	I0914 00:38:51.110537  874848 logs.go:276] 1 containers: [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da]
	I0914 00:38:51.110602  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.114745  874848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 00:38:51.114821  874848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 00:38:51.160743  874848 cri.go:89] found id: "56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:51.160763  874848 cri.go:89] found id: ""
	I0914 00:38:51.160771  874848 logs.go:276] 1 containers: [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d]
	I0914 00:38:51.160828  874848 ssh_runner.go:195] Run: which crictl
	I0914 00:38:51.164783  874848 logs.go:123] Gathering logs for kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] ...
	I0914 00:38:51.164809  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289"
	I0914 00:38:51.215849  874848 logs.go:123] Gathering logs for kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] ...
	I0914 00:38:51.215885  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da"
	I0914 00:38:51.312761  874848 logs.go:123] Gathering logs for kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] ...
	I0914 00:38:51.312793  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d"
	I0914 00:38:51.353667  874848 logs.go:123] Gathering logs for CRI-O ...
	I0914 00:38:51.353697  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 00:38:51.448552  874848 logs.go:123] Gathering logs for container status ...
	I0914 00:38:51.448591  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 00:38:51.500391  874848 logs.go:123] Gathering logs for kubelet ...
	I0914 00:38:51.500420  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 00:38:51.527174  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.432207    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.527480  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.432263    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.527688  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.439952    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.527942  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440000    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.528142  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.440181    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.528385  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.440215    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.528603  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461184    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.528866  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461241    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.529094  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461318    1502 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-885748" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.529366  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.529580  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.529810  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.529984  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.530197  874848 logs.go:138] Found kubelet problem: Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:51.594195  874848 logs.go:123] Gathering logs for coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] ...
	I0914 00:38:51.594227  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9"
	I0914 00:38:51.635725  874848 logs.go:123] Gathering logs for kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] ...
	I0914 00:38:51.635758  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4"
	I0914 00:38:51.704376  874848 logs.go:123] Gathering logs for etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] ...
	I0914 00:38:51.704410  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7"
	I0914 00:38:51.757616  874848 logs.go:123] Gathering logs for kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] ...
	I0914 00:38:51.757649  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887"
	I0914 00:38:51.796955  874848 logs.go:123] Gathering logs for dmesg ...
	I0914 00:38:51.796986  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 00:38:51.815711  874848 logs.go:123] Gathering logs for describe nodes ...
	I0914 00:38:51.815779  874848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 00:38:51.950032  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:51.950064  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 00:38:51.950122  874848 out.go:270] X Problems detected in kubelet:
	W0914 00:38:51.950135  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461343    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.950143  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.461389    1502 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.950157  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.461404    1502 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	W0914 00:38:51.950164  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: W0914 00:37:07.466898    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-885748" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-885748' and this object
	W0914 00:38:51.950177  874848 out.go:270]   Sep 14 00:37:07 addons-885748 kubelet[1502]: E0914 00:37:07.466946    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-885748\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-885748' and this object" logger="UnhandledError"
	I0914 00:38:51.950183  874848 out.go:358] Setting ErrFile to fd 2...
	I0914 00:38:51.950190  874848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:39:01.963900  874848 system_pods.go:59] 18 kube-system pods found
	I0914 00:39:01.963965  874848 system_pods.go:61] "coredns-7c65d6cfc9-8m89r" [550228bd-69a1-4530-af98-0200cecdabf1] Running
	I0914 00:39:01.963975  874848 system_pods.go:61] "csi-hostpath-attacher-0" [cbc09b3c-e59c-4698-b6c7-f9d1746ab697] Running
	I0914 00:39:01.964017  874848 system_pods.go:61] "csi-hostpath-resizer-0" [1d0b01fe-048b-4b9e-82dd-5b408414180f] Running
	I0914 00:39:01.964026  874848 system_pods.go:61] "csi-hostpathplugin-mgx77" [456dedd2-11aa-43aa-8f21-e93340384161] Running
	I0914 00:39:01.964031  874848 system_pods.go:61] "etcd-addons-885748" [76fc0bec-b6e2-415d-8c2a-3bdb3f6bf113] Running
	I0914 00:39:01.964035  874848 system_pods.go:61] "kindnet-m55kx" [724646d8-f3df-4b7c-830a-ec84d16dc1c6] Running
	I0914 00:39:01.964040  874848 system_pods.go:61] "kube-apiserver-addons-885748" [c6447df2-c534-4e85-afc8-5da7d2435aa6] Running
	I0914 00:39:01.964045  874848 system_pods.go:61] "kube-controller-manager-addons-885748" [9727b4e8-1fa1-4175-b2ce-7bdd6ac0676c] Running
	I0914 00:39:01.964050  874848 system_pods.go:61] "kube-ingress-dns-minikube" [e6eb7e3a-203d-452a-b040-fbe431e6f08f] Running
	I0914 00:39:01.964054  874848 system_pods.go:61] "kube-proxy-dqs2h" [ad11d9fd-caaa-4026-86f8-aba3e5ac2834] Running
	I0914 00:39:01.964090  874848 system_pods.go:61] "kube-scheduler-addons-885748" [ae7fd70d-d206-474f-a967-53dc9227db19] Running
	I0914 00:39:01.964102  874848 system_pods.go:61] "metrics-server-84c5f94fbc-96xbg" [9c339307-23c2-46f3-af0b-9a4d12c82b32] Running
	I0914 00:39:01.964107  874848 system_pods.go:61] "nvidia-device-plugin-daemonset-9nphx" [8f3b2546-ef55-49b2-8f31-dd8f4ecdcf93] Running
	I0914 00:39:01.964113  874848 system_pods.go:61] "registry-66c9cd494c-bkhkl" [4d931f29-d87c-4bc8-8e58-88b441e56b0a] Running
	I0914 00:39:01.964118  874848 system_pods.go:61] "registry-proxy-fb2vb" [7d63ca1e-f5bf-47eb-84af-ebd01e9cd4b6] Running
	I0914 00:39:01.964127  874848 system_pods.go:61] "snapshot-controller-56fcc65765-8pfcj" [37872304-9181-40b4-8ebf-9958cdc3a7b0] Running
	I0914 00:39:01.964132  874848 system_pods.go:61] "snapshot-controller-56fcc65765-nwsdn" [bb956da0-8552-4d95-a92d-8a7311005caf] Running
	I0914 00:39:01.964136  874848 system_pods.go:61] "storage-provisioner" [c95fe42f-e257-4b52-ab42-54086f64f2e4] Running
	I0914 00:39:01.964143  874848 system_pods.go:74] duration metric: took 11.142756624s to wait for pod list to return data ...
	I0914 00:39:01.964165  874848 default_sa.go:34] waiting for default service account to be created ...
	I0914 00:39:01.967349  874848 default_sa.go:45] found service account: "default"
	I0914 00:39:01.967378  874848 default_sa.go:55] duration metric: took 3.206253ms for default service account to be created ...
	I0914 00:39:01.967389  874848 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 00:39:01.979121  874848 system_pods.go:86] 18 kube-system pods found
	I0914 00:39:01.979159  874848 system_pods.go:89] "coredns-7c65d6cfc9-8m89r" [550228bd-69a1-4530-af98-0200cecdabf1] Running
	I0914 00:39:01.979168  874848 system_pods.go:89] "csi-hostpath-attacher-0" [cbc09b3c-e59c-4698-b6c7-f9d1746ab697] Running
	I0914 00:39:01.979173  874848 system_pods.go:89] "csi-hostpath-resizer-0" [1d0b01fe-048b-4b9e-82dd-5b408414180f] Running
	I0914 00:39:01.979178  874848 system_pods.go:89] "csi-hostpathplugin-mgx77" [456dedd2-11aa-43aa-8f21-e93340384161] Running
	I0914 00:39:01.979183  874848 system_pods.go:89] "etcd-addons-885748" [76fc0bec-b6e2-415d-8c2a-3bdb3f6bf113] Running
	I0914 00:39:01.979189  874848 system_pods.go:89] "kindnet-m55kx" [724646d8-f3df-4b7c-830a-ec84d16dc1c6] Running
	I0914 00:39:01.979194  874848 system_pods.go:89] "kube-apiserver-addons-885748" [c6447df2-c534-4e85-afc8-5da7d2435aa6] Running
	I0914 00:39:01.979199  874848 system_pods.go:89] "kube-controller-manager-addons-885748" [9727b4e8-1fa1-4175-b2ce-7bdd6ac0676c] Running
	I0914 00:39:01.979210  874848 system_pods.go:89] "kube-ingress-dns-minikube" [e6eb7e3a-203d-452a-b040-fbe431e6f08f] Running
	I0914 00:39:01.979215  874848 system_pods.go:89] "kube-proxy-dqs2h" [ad11d9fd-caaa-4026-86f8-aba3e5ac2834] Running
	I0914 00:39:01.979222  874848 system_pods.go:89] "kube-scheduler-addons-885748" [ae7fd70d-d206-474f-a967-53dc9227db19] Running
	I0914 00:39:01.979226  874848 system_pods.go:89] "metrics-server-84c5f94fbc-96xbg" [9c339307-23c2-46f3-af0b-9a4d12c82b32] Running
	I0914 00:39:01.979243  874848 system_pods.go:89] "nvidia-device-plugin-daemonset-9nphx" [8f3b2546-ef55-49b2-8f31-dd8f4ecdcf93] Running
	I0914 00:39:01.979273  874848 system_pods.go:89] "registry-66c9cd494c-bkhkl" [4d931f29-d87c-4bc8-8e58-88b441e56b0a] Running
	I0914 00:39:01.979280  874848 system_pods.go:89] "registry-proxy-fb2vb" [7d63ca1e-f5bf-47eb-84af-ebd01e9cd4b6] Running
	I0914 00:39:01.979284  874848 system_pods.go:89] "snapshot-controller-56fcc65765-8pfcj" [37872304-9181-40b4-8ebf-9958cdc3a7b0] Running
	I0914 00:39:01.979288  874848 system_pods.go:89] "snapshot-controller-56fcc65765-nwsdn" [bb956da0-8552-4d95-a92d-8a7311005caf] Running
	I0914 00:39:01.979292  874848 system_pods.go:89] "storage-provisioner" [c95fe42f-e257-4b52-ab42-54086f64f2e4] Running
	I0914 00:39:01.979298  874848 system_pods.go:126] duration metric: took 11.903645ms to wait for k8s-apps to be running ...
	I0914 00:39:01.979308  874848 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 00:39:01.979371  874848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:39:01.992649  874848 system_svc.go:56] duration metric: took 13.330968ms WaitForService to wait for kubelet
	I0914 00:39:01.992681  874848 kubeadm.go:582] duration metric: took 2m39.420274083s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:39:01.992702  874848 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:39:01.996886  874848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 00:39:01.996922  874848 node_conditions.go:123] node cpu capacity is 2
	I0914 00:39:01.996936  874848 node_conditions.go:105] duration metric: took 4.227243ms to run NodePressure ...
	I0914 00:39:01.996950  874848 start.go:241] waiting for startup goroutines ...
	I0914 00:39:01.996958  874848 start.go:246] waiting for cluster config update ...
	I0914 00:39:01.996976  874848 start.go:255] writing updated cluster config ...
	I0914 00:39:01.997319  874848 ssh_runner.go:195] Run: rm -f paused
	I0914 00:39:02.385531  874848 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 00:39:02.387155  874848 out.go:177] * Done! kubectl is now configured to use "addons-885748" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 14 00:52:18 addons-885748 crio[965]: time="2024-09-14 00:52:18.370891178Z" level=info msg="Stopped pod sandbox (already stopped): 08cadfc75ed5b8d5f712b2d09326eaf2f3b8ef8aac6734c6ff4179d6343dc336" id=6c38c603-ad73-4cd8-8c30-b917af642fe3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:52:18 addons-885748 crio[965]: time="2024-09-14 00:52:18.371568544Z" level=info msg="Removing pod sandbox: 08cadfc75ed5b8d5f712b2d09326eaf2f3b8ef8aac6734c6ff4179d6343dc336" id=ac0be222-82d8-462d-863d-c7b93a6b70ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:52:18 addons-885748 crio[965]: time="2024-09-14 00:52:18.379111638Z" level=info msg="Removed pod sandbox: 08cadfc75ed5b8d5f712b2d09326eaf2f3b8ef8aac6734c6ff4179d6343dc336" id=ac0be222-82d8-462d-863d-c7b93a6b70ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 14 00:52:26 addons-885748 crio[965]: time="2024-09-14 00:52:26.813720156Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cbea09be-24ee-4f64-b204-87b24efc618c name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:52:26 addons-885748 crio[965]: time="2024-09-14 00:52:26.813950148Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cbea09be-24ee-4f64-b204-87b24efc618c name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:52:39 addons-885748 crio[965]: time="2024-09-14 00:52:39.813741801Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8e5603ed-d95c-4dc9-944a-2c16a4b0cbaa name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:52:39 addons-885748 crio[965]: time="2024-09-14 00:52:39.814000060Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8e5603ed-d95c-4dc9-944a-2c16a4b0cbaa name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:52:50 addons-885748 crio[965]: time="2024-09-14 00:52:50.813991342Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e6c4fc05-40d9-4cef-8a7a-d6ae3a5f8048 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:52:50 addons-885748 crio[965]: time="2024-09-14 00:52:50.814226766Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e6c4fc05-40d9-4cef-8a7a-d6ae3a5f8048 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:53:05 addons-885748 crio[965]: time="2024-09-14 00:53:05.812996415Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6b90dc49-7d11-4070-99e5-d9df393fdfe7 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:53:05 addons-885748 crio[965]: time="2024-09-14 00:53:05.813224939Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6b90dc49-7d11-4070-99e5-d9df393fdfe7 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:53:17 addons-885748 crio[965]: time="2024-09-14 00:53:17.813918139Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cf3f014a-aac8-4180-8bca-a4ddc626e885 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:53:17 addons-885748 crio[965]: time="2024-09-14 00:53:17.814145153Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cf3f014a-aac8-4180-8bca-a4ddc626e885 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:53:32 addons-885748 crio[965]: time="2024-09-14 00:53:32.813813948Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9c62e71e-2e83-4d3d-bad2-3a6fd5566c71 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:53:32 addons-885748 crio[965]: time="2024-09-14 00:53:32.814057306Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9c62e71e-2e83-4d3d-bad2-3a6fd5566c71 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:53:46 addons-885748 crio[965]: time="2024-09-14 00:53:46.813027435Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dc2a4682-6f7e-40e7-85b2-5ea0764cf088 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:53:46 addons-885748 crio[965]: time="2024-09-14 00:53:46.813288557Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dc2a4682-6f7e-40e7-85b2-5ea0764cf088 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:54:01 addons-885748 crio[965]: time="2024-09-14 00:54:01.813272311Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=204ca8a9-c851-486c-a871-b48adafc1b8a name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:54:01 addons-885748 crio[965]: time="2024-09-14 00:54:01.813513002Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=204ca8a9-c851-486c-a871-b48adafc1b8a name=/runtime.v1.ImageService/ImageStatus
	Sep 14 00:54:11 addons-885748 crio[965]: time="2024-09-14 00:54:11.928972680Z" level=info msg="Stopping container: 7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c (timeout: 30s)" id=02b5dbdb-259f-4f21-9d24-b6b95c54473f name=/runtime.v1.RuntimeService/StopContainer
	Sep 14 00:54:13 addons-885748 crio[965]: time="2024-09-14 00:54:13.100582492Z" level=info msg="Stopped container 7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c: kube-system/metrics-server-84c5f94fbc-96xbg/metrics-server" id=02b5dbdb-259f-4f21-9d24-b6b95c54473f name=/runtime.v1.RuntimeService/StopContainer
	Sep 14 00:54:13 addons-885748 crio[965]: time="2024-09-14 00:54:13.103163484Z" level=info msg="Stopping pod sandbox: bb338c2f32bcdd0a33bf859d43c1e633961b5cb7e6e9121ab9c760fa00d637e1" id=d82a8ae1-0778-43d7-9495-ac5025714458 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 14 00:54:13 addons-885748 crio[965]: time="2024-09-14 00:54:13.103459895Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-96xbg Namespace:kube-system ID:bb338c2f32bcdd0a33bf859d43c1e633961b5cb7e6e9121ab9c760fa00d637e1 UID:9c339307-23c2-46f3-af0b-9a4d12c82b32 NetNS:/var/run/netns/f9f7bdbd-c895-4894-8ce8-ce9c9f9ca77e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 14 00:54:13 addons-885748 crio[965]: time="2024-09-14 00:54:13.103649683Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-96xbg from CNI network \"kindnet\" (type=ptp)"
	Sep 14 00:54:13 addons-885748 crio[965]: time="2024-09-14 00:54:13.131450882Z" level=info msg="Stopped pod sandbox: bb338c2f32bcdd0a33bf859d43c1e633961b5cb7e6e9121ab9c760fa00d637e1" id=d82a8ae1-0778-43d7-9495-ac5025714458 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4bb1da24cf139       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   68d584bcdc7d1       hello-world-app-55bf9c44b4-d9r78
	3b8982f463ba7       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                         5 minutes ago       Running             nginx                     0                   a5764dafff354       nginx
	fc0328f66b9e0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            15 minutes ago      Running             gcp-auth                  0                   d7b70729e47d5       gcp-auth-89d5ffd79-frj5t
	8091d19cac440       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        16 minutes ago      Running             local-path-provisioner    0                   1220f396bbf80       local-path-provisioner-86d989889c-dlghs
	7d56766635b73       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   16 minutes ago      Exited              metrics-server            0                   bb338c2f32bcd       metrics-server-84c5f94fbc-96xbg
	80e8332c931e9       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        17 minutes ago      Running             coredns                   0                   249d9842b4544       coredns-7c65d6cfc9-8m89r
	ebb2e7bdbbfd4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        17 minutes ago      Running             storage-provisioner       0                   105d379cff026       storage-provisioner
	56f7319a8a8d6       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        17 minutes ago      Running             kindnet-cni               0                   0c72d454012fd       kindnet-m55kx
	a47b8e8869ee8       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        17 minutes ago      Running             kube-proxy                0                   ffefb18074c57       kube-proxy-dqs2h
	48d812ac2652a       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        18 minutes ago      Running             kube-apiserver            0                   828ea1cf2ba92       kube-apiserver-addons-885748
	f3056a13deffd       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        18 minutes ago      Running             kube-controller-manager   0                   cc2cb3c49ab23       kube-controller-manager-addons-885748
	d793e5939094c       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        18 minutes ago      Running             kube-scheduler            0                   8aac50a11aa1f       kube-scheduler-addons-885748
	f8b9a437608b9       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        18 minutes ago      Running             etcd                      0                   fecaa719a39f6       etcd-addons-885748
	
	
	==> coredns [80e8332c931e96bf6761851e81369dce2666c6776551b7bcfc78ee52f8150fa9] <==
	[INFO] 10.244.0.12:33235 - 38053 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000139427s
	[INFO] 10.244.0.12:39174 - 20223 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002168352s
	[INFO] 10.244.0.12:39174 - 34553 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001903267s
	[INFO] 10.244.0.12:55989 - 9515 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000126611s
	[INFO] 10.244.0.12:55989 - 28949 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008036s
	[INFO] 10.244.0.12:44725 - 25596 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000196631s
	[INFO] 10.244.0.12:44725 - 58609 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000336263s
	[INFO] 10.244.0.12:37024 - 61418 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000840966s
	[INFO] 10.244.0.12:37024 - 33000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112703s
	[INFO] 10.244.0.12:43586 - 62400 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096252s
	[INFO] 10.244.0.12:43586 - 21956 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000270361s
	[INFO] 10.244.0.12:39958 - 65451 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001701713s
	[INFO] 10.244.0.12:39958 - 44969 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001315145s
	[INFO] 10.244.0.12:45882 - 11582 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000106755s
	[INFO] 10.244.0.12:45882 - 57120 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000056901s
	[INFO] 10.244.0.20:40467 - 22508 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0009463s
	[INFO] 10.244.0.20:47828 - 50659 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016816s
	[INFO] 10.244.0.20:56008 - 60050 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000270443s
	[INFO] 10.244.0.20:57451 - 45764 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000274734s
	[INFO] 10.244.0.20:45104 - 4965 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000161744s
	[INFO] 10.244.0.20:37823 - 38164 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000285663s
	[INFO] 10.244.0.20:38730 - 54617 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003590072s
	[INFO] 10.244.0.20:48720 - 43288 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003891685s
	[INFO] 10.244.0.20:50211 - 8144 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002567145s
	[INFO] 10.244.0.20:33394 - 12183 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002678577s
	
	
	==> describe nodes <==
	Name:               addons-885748
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-885748
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=addons-885748
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_36_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-885748
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:36:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-885748
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:54:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:51:27 +0000   Sat, 14 Sep 2024 00:36:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:51:27 +0000   Sat, 14 Sep 2024 00:36:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:51:27 +0000   Sat, 14 Sep 2024 00:36:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:51:27 +0000   Sat, 14 Sep 2024 00:37:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-885748
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 4359fb52d09b48a99b9422f7ed1aab10
	  System UUID:                97520139-af6f-4519-ad5d-f1e74ef171eb
	  Boot ID:                    fb6d1488-4ff6-49a9-b7dc-0ab0c636005f
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-world-app-55bf9c44b4-d9r78           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  gcp-auth                    gcp-auth-89d5ffd79-frj5t                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-8m89r                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-addons-885748                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-m55kx                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-885748               250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-885748      200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-dqs2h                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-885748               100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  local-path-storage          local-path-provisioner-86d989889c-dlghs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 17m   kube-proxy       
	  Normal   Starting                 17m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m   kubelet          Node addons-885748 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m   kubelet          Node addons-885748 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m   kubelet          Node addons-885748 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m   node-controller  Node addons-885748 event: Registered Node addons-885748 in Controller
	  Normal   NodeReady                17m   kubelet          Node addons-885748 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [f8b9a437608b919f77c00d5ce4eaf25a4169bd8ceac009e52be77d41752449d7] <==
	{"level":"info","ts":"2024-09-14T00:36:11.809593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.809640Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.809693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.809738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T00:36:11.812081Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-885748 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:36:11.812295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:36:11.813737Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:11.813914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:36:11.816988Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:36:11.817279Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:36:11.817309Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:36:11.817872Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:36:11.818699Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:36:11.819114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-14T00:36:11.819222Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:11.821360Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:11.821444Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:36:23.269674Z","caller":"traceutil/trace.go:171","msg":"trace[1517059370] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"215.568054ms","start":"2024-09-14T00:36:23.054082Z","end":"2024-09-14T00:36:23.269650Z","steps":["trace[1517059370] 'process raft request'  (duration: 116.127873ms)","trace[1517059370] 'compare'  (duration: 99.327166ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-14T00:38:00.269469Z","caller":"traceutil/trace.go:171","msg":"trace[1121860251] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"101.073361ms","start":"2024-09-14T00:38:00.168377Z","end":"2024-09-14T00:38:00.269450Z","steps":["trace[1121860251] 'process raft request'  (duration: 92.406917ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-14T00:46:12.426019Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-09-14T00:46:12.459407Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"32.859233ms","hash":3731172354,"current-db-size-bytes":6463488,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3293184,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-14T00:46:12.459464Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3731172354,"revision":1514,"compact-revision":-1}
	{"level":"info","ts":"2024-09-14T00:51:12.431436Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1932}
	{"level":"info","ts":"2024-09-14T00:51:12.448360Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1932,"took":"16.336576ms","hash":2051484435,"current-db-size-bytes":6463488,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":4550656,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-14T00:51:12.448406Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2051484435,"revision":1932,"compact-revision":1514}
	
	
	==> gcp-auth [fc0328f66b9e0b6021b961c2ff50a7c98d37c2056b93b1910e5cac7120024106] <==
	2024/09/14 00:39:02 Ready to write response ...
	2024/09/14 00:39:02 Ready to marshal response ...
	2024/09/14 00:39:02 Ready to write response ...
	2024/09/14 00:47:16 Ready to marshal response ...
	2024/09/14 00:47:16 Ready to write response ...
	2024/09/14 00:47:20 Ready to marshal response ...
	2024/09/14 00:47:20 Ready to write response ...
	2024/09/14 00:47:42 Ready to marshal response ...
	2024/09/14 00:47:42 Ready to write response ...
	2024/09/14 00:48:17 Ready to marshal response ...
	2024/09/14 00:48:17 Ready to write response ...
	2024/09/14 00:48:18 Ready to marshal response ...
	2024/09/14 00:48:18 Ready to write response ...
	2024/09/14 00:48:25 Ready to marshal response ...
	2024/09/14 00:48:25 Ready to write response ...
	2024/09/14 00:48:26 Ready to marshal response ...
	2024/09/14 00:48:26 Ready to write response ...
	2024/09/14 00:48:27 Ready to marshal response ...
	2024/09/14 00:48:27 Ready to write response ...
	2024/09/14 00:48:27 Ready to marshal response ...
	2024/09/14 00:48:27 Ready to write response ...
	2024/09/14 00:48:54 Ready to marshal response ...
	2024/09/14 00:48:54 Ready to write response ...
	2024/09/14 00:51:14 Ready to marshal response ...
	2024/09/14 00:51:14 Ready to write response ...
	
	
	==> kernel <==
	 00:54:13 up  4:36,  0 users,  load average: 0.03, 0.33, 1.40
	Linux addons-885748 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [56f7319a8a8d62442a16c573367b0856c4fe9d7061194116de17211b443cb72d] <==
	I0914 00:52:06.817397       1 main.go:299] handling current node
	I0914 00:52:16.814648       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:52:16.814682       1 main.go:299] handling current node
	I0914 00:52:26.814772       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:52:26.814808       1 main.go:299] handling current node
	I0914 00:52:36.817337       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:52:36.817460       1 main.go:299] handling current node
	I0914 00:52:46.815687       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:52:46.815722       1 main.go:299] handling current node
	I0914 00:52:56.814559       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:52:56.814682       1 main.go:299] handling current node
	I0914 00:53:06.819078       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:53:06.819112       1 main.go:299] handling current node
	I0914 00:53:16.814560       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:53:16.814592       1 main.go:299] handling current node
	I0914 00:53:26.814930       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:53:26.814967       1 main.go:299] handling current node
	I0914 00:53:36.819530       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:53:36.819566       1 main.go:299] handling current node
	I0914 00:53:46.815285       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:53:46.815315       1 main.go:299] handling current node
	I0914 00:53:56.816706       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:53:56.816739       1 main.go:299] handling current node
	I0914 00:54:06.814551       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:54:06.814585       1 main.go:299] handling current node
	
	
	==> kube-apiserver [48d812ac2652afccf8e0a8ebbc6325ba50dee51268df5fad49cb70eb5003a7b4] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0914 00:38:28.081595       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.50.17:443: connect: connection refused" logger="UnhandledError"
	E0914 00:38:28.087158       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.50.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.50.17:443: connect: connection refused" logger="UnhandledError"
	I0914 00:38:28.202850       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0914 00:47:30.583195       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0914 00:47:58.487976       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.488113       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.515016       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.515142       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.531465       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.531532       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.560654       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.560914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 00:47:58.650738       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 00:47:58.651626       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0914 00:47:59.605910       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0914 00:47:59.652010       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 00:47:59.748780       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0914 00:48:26.968761       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.170.149"}
	I0914 00:48:48.270011       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0914 00:48:49.312565       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0914 00:48:53.824100       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0914 00:48:54.148868       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.26.185"}
	I0914 00:51:14.321212       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.7.199"}
	
	
	==> kube-controller-manager [f3056a13deffd320bbd691e5f22c5f5dbbaaccd4aa9e8b80fa49cf42d2ab29da] <==
	W0914 00:52:09.294589       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:52:09.294628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:52:14.887605       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:52:14.887656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:52:25.133969       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:52:25.134012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:52:34.877762       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:52:34.877808       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:53:05.674471       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:53:05.674608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:53:05.694314       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:53:05.694356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:53:14.027608       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:53:14.027649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:53:23.784289       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:53:23.784335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:53:37.408449       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:53:37.408492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:53:56.331718       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:53:56.331762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 00:54:06.216456       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:54:06.216498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 00:54:11.893310       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.091µs"
	W0914 00:54:13.403345       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 00:54:13.403464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [a47b8e8869ee8549a1369a994607a2886627e803c8cd608f2e9ae96584f25887] <==
	I0914 00:36:28.440930       1 server_linux.go:66] "Using iptables proxy"
	I0914 00:36:28.793093       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0914 00:36:28.800749       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:36:28.881787       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 00:36:28.881853       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:36:28.885486       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:36:28.886040       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:36:28.886064       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:36:28.895020       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:36:28.895563       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:36:28.895926       1 config.go:199] "Starting service config controller"
	I0914 00:36:28.895990       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:36:28.896039       1 config.go:328] "Starting node config controller"
	I0914 00:36:28.896085       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:36:28.997049       1 shared_informer.go:320] Caches are synced for node config
	I0914 00:36:28.997094       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:36:28.997136       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [d793e5939094cbc162b91d2b144161d466b8af2b0b40c176667ef631d5946289] <==
	W0914 00:36:15.114600       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 00:36:15.115262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:15.991953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 00:36:15.991999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.008831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 00:36:16.008904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.013935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 00:36:16.013978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.023529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 00:36:16.023578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.064172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 00:36:16.064214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.099965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 00:36:16.100011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.138383       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 00:36:16.138445       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 00:36:16.186067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 00:36:16.186184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.187287       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 00:36:16.187385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.203065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 00:36:16.203112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 00:36:16.215830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 00:36:16.215921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0914 00:36:18.915639       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 00:53:18 addons-885748 kubelet[1502]: E0914 00:53:18.219753    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275198219476636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:28 addons-885748 kubelet[1502]: E0914 00:53:28.222612    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275208222380622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:28 addons-885748 kubelet[1502]: E0914 00:53:28.222651    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275208222380622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:32 addons-885748 kubelet[1502]: E0914 00:53:32.814319    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="64665afd-5894-48bd-a4bb-693ba380ced0"
	Sep 14 00:53:38 addons-885748 kubelet[1502]: E0914 00:53:38.225312    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275218225047830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:38 addons-885748 kubelet[1502]: E0914 00:53:38.225348    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275218225047830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:46 addons-885748 kubelet[1502]: E0914 00:53:46.813745    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="64665afd-5894-48bd-a4bb-693ba380ced0"
	Sep 14 00:53:48 addons-885748 kubelet[1502]: E0914 00:53:48.227621    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275228227418599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:48 addons-885748 kubelet[1502]: E0914 00:53:48.227656    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275228227418599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:58 addons-885748 kubelet[1502]: E0914 00:53:58.229977    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275238229746516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:53:58 addons-885748 kubelet[1502]: E0914 00:53:58.230013    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275238229746516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:54:01 addons-885748 kubelet[1502]: E0914 00:54:01.814035    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="64665afd-5894-48bd-a4bb-693ba380ced0"
	Sep 14 00:54:08 addons-885748 kubelet[1502]: E0914 00:54:08.232348    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275248232067877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:54:08 addons-885748 kubelet[1502]: E0914 00:54:08.232384    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726275248232067877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572278,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 00:54:13 addons-885748 kubelet[1502]: I0914 00:54:13.279827    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk7mz\" (UniqueName: \"kubernetes.io/projected/9c339307-23c2-46f3-af0b-9a4d12c82b32-kube-api-access-rk7mz\") pod \"9c339307-23c2-46f3-af0b-9a4d12c82b32\" (UID: \"9c339307-23c2-46f3-af0b-9a4d12c82b32\") "
	Sep 14 00:54:13 addons-885748 kubelet[1502]: I0914 00:54:13.279889    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9c339307-23c2-46f3-af0b-9a4d12c82b32-tmp-dir\") pod \"9c339307-23c2-46f3-af0b-9a4d12c82b32\" (UID: \"9c339307-23c2-46f3-af0b-9a4d12c82b32\") "
	Sep 14 00:54:13 addons-885748 kubelet[1502]: I0914 00:54:13.280257    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c339307-23c2-46f3-af0b-9a4d12c82b32-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "9c339307-23c2-46f3-af0b-9a4d12c82b32" (UID: "9c339307-23c2-46f3-af0b-9a4d12c82b32"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 14 00:54:13 addons-885748 kubelet[1502]: I0914 00:54:13.286011    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c339307-23c2-46f3-af0b-9a4d12c82b32-kube-api-access-rk7mz" (OuterVolumeSpecName: "kube-api-access-rk7mz") pod "9c339307-23c2-46f3-af0b-9a4d12c82b32" (UID: "9c339307-23c2-46f3-af0b-9a4d12c82b32"). InnerVolumeSpecName "kube-api-access-rk7mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 00:54:13 addons-885748 kubelet[1502]: I0914 00:54:13.380643    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rk7mz\" (UniqueName: \"kubernetes.io/projected/9c339307-23c2-46f3-af0b-9a4d12c82b32-kube-api-access-rk7mz\") on node \"addons-885748\" DevicePath \"\""
	Sep 14 00:54:13 addons-885748 kubelet[1502]: I0914 00:54:13.380692    1502 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9c339307-23c2-46f3-af0b-9a4d12c82b32-tmp-dir\") on node \"addons-885748\" DevicePath \"\""
	Sep 14 00:54:13 addons-885748 kubelet[1502]: I0914 00:54:13.616715    1502 scope.go:117] "RemoveContainer" containerID="7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c"
	Sep 14 00:54:13 addons-885748 kubelet[1502]: I0914 00:54:13.645940    1502 scope.go:117] "RemoveContainer" containerID="7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c"
	Sep 14 00:54:13 addons-885748 kubelet[1502]: E0914 00:54:13.647694    1502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c\": container with ID starting with 7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c not found: ID does not exist" containerID="7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c"
	Sep 14 00:54:13 addons-885748 kubelet[1502]: I0914 00:54:13.647753    1502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c"} err="failed to get container status \"7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c\": rpc error: code = NotFound desc = could not find container \"7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c\": container with ID starting with 7d56766635b7331716344237672bdcc7182d2cae41bbe2bae17b4a6de395a40c not found: ID does not exist"
	Sep 14 00:54:13 addons-885748 kubelet[1502]: I0914 00:54:13.814924    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c339307-23c2-46f3-af0b-9a4d12c82b32" path="/var/lib/kubelet/pods/9c339307-23c2-46f3-af0b-9a4d12c82b32/volumes"
	
	
	==> storage-provisioner [ebb2e7bdbbfd4e15a1df8147f8ab8e288ada7a8b4fb1482db8fd01effcb11eef] <==
	I0914 00:37:07.958850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 00:37:07.973989       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 00:37:07.974042       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 00:37:07.989411       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 00:37:07.989599       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-885748_49a32771-25bb-4138-a77d-2ab9ae6fe758!
	I0914 00:37:07.993894       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1f3463c-ac9e-45b9-aadc-bdd81184edd4", APIVersion:"v1", ResourceVersion:"870", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-885748_49a32771-25bb-4138-a77d-2ab9ae6fe758 became leader
	I0914 00:37:08.090873       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-885748_49a32771-25bb-4138-a77d-2ab9ae6fe758!
	

                                                
                                                
-- /stdout --
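The kubelet log above repeats "failed to get HasDedicatedImageFs: missing image stats" every ten seconds: the eviction manager cannot obtain image filesystem stats for /var/lib/containers/storage/overlay-images from CRI-O. A minimal way to see what the runtime actually reports, assuming the addons-885748 node is still running and crictl is available in the kicbase image (these commands are illustrative only and are not part of the test run):

	# Ask CRI-O for its image filesystem stats directly
	out/minikube-linux-arm64 -p addons-885748 ssh -- sudo crictl imagefsinfo
	# Compare with what the kernel reports for the same mountpoint
	out/minikube-linux-arm64 -p addons-885748 ssh -- df -h /var/lib/containers/storage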
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-885748 -n addons-885748
helpers_test.go:261: (dbg) Run:  kubectl --context addons-885748 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-885748 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-885748 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-885748/192.168.49.2
	Start Time:       Sat, 14 Sep 2024 00:39:02 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9c6mj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9c6mj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  15m                  default-scheduler  Successfully assigned default/busybox to addons-885748
	  Normal   Pulling    13m (x4 over 15m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     13m (x4 over 15m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     13m (x4 over 15m)    kubelet            Error: ErrImagePull
	  Warning  Failed     13m (x6 over 15m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    5m7s (x43 over 15m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (348.43s)
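The describe output above shows why the busybox pod never starts: each pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password: unauthorized: authentication failed", so the problem is registry authentication (possibly the placeholder credentials injected by the gcp-auth addon, visible in the pod's environment above) rather than DNS or pod networking. A hedged reproduction sketch using only names already present in the report; these are not commands the test itself runs:

	# Show the pull failures in event order
	kubectl --context addons-885748 -n default get events --field-selector involvedObject.name=busybox --sort-by=.lastTimestamp
	# Try the same pull from the node's runtime, bypassing the kubelet
	out/minikube-linux-arm64 -p addons-885748 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc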

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (128.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-401927 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0914 01:07:54.331454  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:09:02.896498  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-401927 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m3.593709414s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-401927       NotReady   control-plane   10m     v1.31.1
	ha-401927-m02   Ready      control-plane   10m     v1.31.1
	ha-401927-m04   Ready      <none>          7m50s   v1.31.1

                                                
                                                
-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
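Both checks above agree: after the restart the primary control plane ha-401927 reports its Ready condition as Unknown, so only two of the expected three nodes are Ready. A small sketch for poking at this by hand, reusing the report's own go-template; the jsonpath variant and the describe call are illustrative additions:

	# Same Ready-condition dump the test uses
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# Ready condition of the NotReady node only
	kubectl get node ha-401927 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Reason and message behind the Unknown status
	kubectl describe node ha-401927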
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-401927
helpers_test.go:235: (dbg) docker inspect ha-401927:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a653e1e66e311cc3f4d67545172636f17a5f1830f207f407847e9b3124791583",
	        "Created": "2024-09-14T00:58:28.220556984Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935287,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-14T01:07:28.599140002Z",
	            "FinishedAt": "2024-09-14T01:07:27.697331464Z"
	        },
	        "Image": "sha256:fe3365929e6ce54b4c06f0bc3d1500dff08f535844ef4978f2c45cd67c542134",
	        "ResolvConfPath": "/var/lib/docker/containers/a653e1e66e311cc3f4d67545172636f17a5f1830f207f407847e9b3124791583/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a653e1e66e311cc3f4d67545172636f17a5f1830f207f407847e9b3124791583/hostname",
	        "HostsPath": "/var/lib/docker/containers/a653e1e66e311cc3f4d67545172636f17a5f1830f207f407847e9b3124791583/hosts",
	        "LogPath": "/var/lib/docker/containers/a653e1e66e311cc3f4d67545172636f17a5f1830f207f407847e9b3124791583/a653e1e66e311cc3f4d67545172636f17a5f1830f207f407847e9b3124791583-json.log",
	        "Name": "/ha-401927",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-401927:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-401927",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/125154ee68a2c08e62d22073e4f6a7164477f12083d0c1daf6e1c7c42bbc979c-init/diff:/var/lib/docker/overlay2/75b2121147f32424fffc5e50d2609c96cf2fdc411273d8660afbb09b8a3ad07a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/125154ee68a2c08e62d22073e4f6a7164477f12083d0c1daf6e1c7c42bbc979c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/125154ee68a2c08e62d22073e4f6a7164477f12083d0c1daf6e1c7c42bbc979c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/125154ee68a2c08e62d22073e4f6a7164477f12083d0c1daf6e1c7c42bbc979c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-401927",
	                "Source": "/var/lib/docker/volumes/ha-401927/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-401927",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-401927",
	                "name.minikube.sigs.k8s.io": "ha-401927",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d35ee97d65417046e1180270a0128cd09bf653f18464bec27d54ecf00806fdc",
	            "SandboxKey": "/var/run/docker/netns/1d35ee97d654",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33624"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33625"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33628"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33626"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33627"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-401927": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e97df3d26894448e672d333646b370a06a47464c3c522ec99ddbbb8c237cea77",
	                    "EndpointID": "1cb7f0eb6e508c771b6de5c2f9aa6b4fee7ed7a33959d9009e7aee455d36194f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-401927",
	                        "a653e1e66e31"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
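The inspect output shows the restarted container came back with all five service ports (22, 2376, 5000, 8443, 32443) republished on 127.0.0.1 under freshly assigned host ports (33624-33628 in this run), which is what the provisioning log further below keys off when it dials SSH. A quick sketch for pulling a single mapping or the container state out of the same inspect data; the format strings mirror the ones minikube itself uses later in this log:

	# Host port mapped to the API server port 8443/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-401927
	# Container state, the same check fix.go runs before restarting the container
	docker container inspect ha-401927 --format={{.State.Status}}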
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-401927 -n ha-401927
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-401927 logs -n 25: (2.085242668s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-401927 cp ha-401927-m03:/home/docker/cp-test.txt                              | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927-m04:/home/docker/cp-test_ha-401927-m03_ha-401927-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-401927 ssh -n                                                                 | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-401927 ssh -n ha-401927-m04 sudo cat                                          | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | /home/docker/cp-test_ha-401927-m03_ha-401927-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-401927 cp testdata/cp-test.txt                                                | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-401927 ssh -n                                                                 | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-401927 cp ha-401927-m04:/home/docker/cp-test.txt                              | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1688928420/001/cp-test_ha-401927-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-401927 ssh -n                                                                 | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-401927 cp ha-401927-m04:/home/docker/cp-test.txt                              | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927:/home/docker/cp-test_ha-401927-m04_ha-401927.txt                       |           |         |         |                     |                     |
	| ssh     | ha-401927 ssh -n                                                                 | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-401927 ssh -n ha-401927 sudo cat                                              | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | /home/docker/cp-test_ha-401927-m04_ha-401927.txt                                 |           |         |         |                     |                     |
	| cp      | ha-401927 cp ha-401927-m04:/home/docker/cp-test.txt                              | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927-m02:/home/docker/cp-test_ha-401927-m04_ha-401927-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-401927 ssh -n                                                                 | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-401927 ssh -n ha-401927-m02 sudo cat                                          | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | /home/docker/cp-test_ha-401927-m04_ha-401927-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-401927 cp ha-401927-m04:/home/docker/cp-test.txt                              | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927-m03:/home/docker/cp-test_ha-401927-m04_ha-401927-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-401927 ssh -n                                                                 | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | ha-401927-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-401927 ssh -n ha-401927-m03 sudo cat                                          | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | /home/docker/cp-test_ha-401927-m04_ha-401927-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-401927 node stop m02 -v=7                                                     | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-401927 node start m02 -v=7                                                    | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:02 UTC | 14 Sep 24 01:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-401927 -v=7                                                           | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-401927 -v=7                                                                | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:03 UTC | 14 Sep 24 01:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-401927 --wait=true -v=7                                                    | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:03 UTC | 14 Sep 24 01:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-401927                                                                | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:06 UTC |                     |
	| node    | ha-401927 node delete m03 -v=7                                                   | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:06 UTC | 14 Sep 24 01:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-401927 stop -v=7                                                              | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:06 UTC | 14 Sep 24 01:07 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-401927 --wait=true                                                         | ha-401927 | jenkins | v1.34.0 | 14 Sep 24 01:07 UTC | 14 Sep 24 01:09 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 01:07:28
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 01:07:28.128592  935082 out.go:345] Setting OutFile to fd 1 ...
	I0914 01:07:28.129018  935082 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:07:28.129033  935082 out.go:358] Setting ErrFile to fd 2...
	I0914 01:07:28.129039  935082 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:07:28.129361  935082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 01:07:28.129783  935082 out.go:352] Setting JSON to false
	I0914 01:07:28.130680  935082 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17393,"bootTime":1726258656,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0914 01:07:28.130764  935082 start.go:139] virtualization:  
	I0914 01:07:28.134269  935082 out.go:177] * [ha-401927] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 01:07:28.137662  935082 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 01:07:28.137752  935082 notify.go:220] Checking for updates...
	I0914 01:07:28.142952  935082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 01:07:28.145647  935082 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 01:07:28.148250  935082 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	I0914 01:07:28.150898  935082 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 01:07:28.153550  935082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 01:07:28.156672  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:28.157207  935082 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 01:07:28.185414  935082 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 01:07:28.185553  935082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 01:07:28.248402  935082 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-14 01:07:28.238814999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 01:07:28.248528  935082 docker.go:318] overlay module found
	I0914 01:07:28.251329  935082 out.go:177] * Using the docker driver based on existing profile
	I0914 01:07:28.253844  935082 start.go:297] selected driver: docker
	I0914 01:07:28.253884  935082 start.go:901] validating driver "docker" against &{Name:ha-401927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-401927 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kub
evirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:07:28.254049  935082 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 01:07:28.254151  935082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 01:07:28.304926  935082 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-14 01:07:28.295509639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 01:07:28.305471  935082 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:07:28.305504  935082 cni.go:84] Creating CNI manager for ""
	I0914 01:07:28.305548  935082 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0914 01:07:28.305601  935082 start.go:340] cluster config:
	{Name:ha-401927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-401927 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device
-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval
:1m0s}
	I0914 01:07:28.308643  935082 out.go:177] * Starting "ha-401927" primary control-plane node in "ha-401927" cluster
	I0914 01:07:28.311553  935082 cache.go:121] Beginning downloading kic base image for docker with crio
	I0914 01:07:28.314202  935082 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0914 01:07:28.316806  935082 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:07:28.316886  935082 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0914 01:07:28.316898  935082 cache.go:56] Caching tarball of preloaded images
	I0914 01:07:28.316898  935082 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 01:07:28.316981  935082 preload.go:172] Found /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 01:07:28.317003  935082 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 01:07:28.317145  935082 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/config.json ...
	W0914 01:07:28.336048  935082 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 is of wrong architecture
	I0914 01:07:28.336070  935082 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 01:07:28.336151  935082 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 01:07:28.336174  935082 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 01:07:28.336180  935082 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 01:07:28.336188  935082 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 01:07:28.336194  935082 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0914 01:07:28.337660  935082 image.go:273] response: 
	I0914 01:07:28.459730  935082 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0914 01:07:28.459771  935082 cache.go:194] Successfully downloaded all kic artifacts
	I0914 01:07:28.459803  935082 start.go:360] acquireMachinesLock for ha-401927: {Name:mk6a012ce5a6ac8d6f85d7962b06771e9d393fee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:07:28.459888  935082 start.go:364] duration metric: took 50.567µs to acquireMachinesLock for "ha-401927"
	I0914 01:07:28.459912  935082 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:07:28.459920  935082 fix.go:54] fixHost starting: 
	I0914 01:07:28.460198  935082 cli_runner.go:164] Run: docker container inspect ha-401927 --format={{.State.Status}}
	I0914 01:07:28.475941  935082 fix.go:112] recreateIfNeeded on ha-401927: state=Stopped err=<nil>
	W0914 01:07:28.475970  935082 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:07:28.479254  935082 out.go:177] * Restarting existing docker container for "ha-401927" ...
	I0914 01:07:28.481773  935082 cli_runner.go:164] Run: docker start ha-401927
	I0914 01:07:28.808675  935082 cli_runner.go:164] Run: docker container inspect ha-401927 --format={{.State.Status}}
	I0914 01:07:28.836682  935082 kic.go:430] container "ha-401927" state is running.
	I0914 01:07:28.837099  935082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927
	I0914 01:07:28.860318  935082 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/config.json ...
	I0914 01:07:28.860559  935082 machine.go:93] provisionDockerMachine start ...
	I0914 01:07:28.860627  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:07:28.887156  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:07:28.887493  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33624 <nil> <nil>}
	I0914 01:07:28.887507  935082 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:07:28.888147  935082 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0914 01:07:32.012918  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401927
	
	I0914 01:07:32.012952  935082 ubuntu.go:169] provisioning hostname "ha-401927"
	I0914 01:07:32.013030  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:07:32.032556  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:07:32.032847  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33624 <nil> <nil>}
	I0914 01:07:32.032871  935082 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-401927 && echo "ha-401927" | sudo tee /etc/hostname
	I0914 01:07:32.165514  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401927
	
	I0914 01:07:32.165610  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:07:32.183522  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:07:32.183795  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33624 <nil> <nil>}
	I0914 01:07:32.183820  935082 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-401927' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-401927/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-401927' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:07:32.305312  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:07:32.305336  935082 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-868698/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-868698/.minikube}
	I0914 01:07:32.305367  935082 ubuntu.go:177] setting up certificates
	I0914 01:07:32.305378  935082 provision.go:84] configureAuth start
	I0914 01:07:32.305437  935082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927
	I0914 01:07:32.322229  935082 provision.go:143] copyHostCerts
	I0914 01:07:32.322276  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem
	I0914 01:07:32.322311  935082 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem, removing ...
	I0914 01:07:32.322322  935082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem
	I0914 01:07:32.322397  935082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem (1078 bytes)
	I0914 01:07:32.322495  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem
	I0914 01:07:32.322519  935082 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem, removing ...
	I0914 01:07:32.322528  935082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem
	I0914 01:07:32.322559  935082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem (1123 bytes)
	I0914 01:07:32.322607  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem
	I0914 01:07:32.322634  935082 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem, removing ...
	I0914 01:07:32.322642  935082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem
	I0914 01:07:32.322667  935082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem (1679 bytes)
	I0914 01:07:32.322718  935082 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem org=jenkins.ha-401927 san=[127.0.0.1 192.168.49.2 ha-401927 localhost minikube]
	I0914 01:07:32.689989  935082 provision.go:177] copyRemoteCerts
	I0914 01:07:32.690150  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:07:32.690198  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:07:32.706813  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33624 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927/id_rsa Username:docker}
	I0914 01:07:32.794156  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 01:07:32.794218  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:07:32.818817  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 01:07:32.818881  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0914 01:07:32.843349  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 01:07:32.843413  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 01:07:32.867780  935082 provision.go:87] duration metric: took 562.38774ms to configureAuth
	I0914 01:07:32.867805  935082 ubuntu.go:193] setting minikube options for container-runtime
	I0914 01:07:32.868063  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:32.868175  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:07:32.885286  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:07:32.885533  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33624 <nil> <nil>}
	I0914 01:07:32.885560  935082 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:07:33.326699  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:07:33.326722  935082 machine.go:96] duration metric: took 4.466153397s to provisionDockerMachine
	I0914 01:07:33.326735  935082 start.go:293] postStartSetup for "ha-401927" (driver="docker")
	I0914 01:07:33.326747  935082 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:07:33.326861  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:07:33.326918  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:07:33.347444  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33624 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927/id_rsa Username:docker}
	I0914 01:07:33.442133  935082 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:07:33.445276  935082 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 01:07:33.445314  935082 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 01:07:33.445342  935082 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 01:07:33.445355  935082 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 01:07:33.445366  935082 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/addons for local assets ...
	I0914 01:07:33.445432  935082 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/files for local assets ...
	I0914 01:07:33.445522  935082 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem -> 8740792.pem in /etc/ssl/certs
	I0914 01:07:33.445533  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem -> /etc/ssl/certs/8740792.pem
	I0914 01:07:33.445636  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:07:33.454015  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem --> /etc/ssl/certs/8740792.pem (1708 bytes)
	I0914 01:07:33.478316  935082 start.go:296] duration metric: took 151.566235ms for postStartSetup
	I0914 01:07:33.478428  935082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 01:07:33.478499  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:07:33.495100  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33624 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927/id_rsa Username:docker}
	I0914 01:07:33.578348  935082 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 01:07:33.582804  935082 fix.go:56] duration metric: took 5.122876475s for fixHost
	I0914 01:07:33.582831  935082 start.go:83] releasing machines lock for "ha-401927", held for 5.122931053s
	I0914 01:07:33.582907  935082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927
	I0914 01:07:33.599401  935082 ssh_runner.go:195] Run: cat /version.json
	I0914 01:07:33.599455  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:07:33.599724  935082 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:07:33.599777  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:07:33.618693  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33624 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927/id_rsa Username:docker}
	I0914 01:07:33.620424  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33624 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927/id_rsa Username:docker}
	I0914 01:07:33.704763  935082 ssh_runner.go:195] Run: systemctl --version
	I0914 01:07:33.837514  935082 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:07:33.978871  935082 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 01:07:33.983904  935082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:07:33.992691  935082 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 01:07:33.992765  935082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:07:34.003805  935082 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 01:07:34.003904  935082 start.go:495] detecting cgroup driver to use...
	I0914 01:07:34.003979  935082 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 01:07:34.004039  935082 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:07:34.017464  935082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:07:34.029581  935082 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:07:34.029647  935082 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:07:34.043099  935082 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:07:34.055936  935082 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:07:34.146452  935082 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:07:34.235051  935082 docker.go:233] disabling docker service ...
	I0914 01:07:34.235169  935082 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:07:34.247840  935082 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:07:34.259797  935082 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:07:34.356385  935082 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:07:34.450449  935082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:07:34.462314  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:07:34.479427  935082 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:07:34.479518  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:34.490240  935082 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:07:34.490329  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:34.500227  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:34.510357  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:34.520368  935082 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:07:34.530220  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:34.540391  935082 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:34.550487  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:34.560515  935082 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:07:34.569226  935082 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:07:34.578157  935082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:34.675178  935082 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:07:34.790852  935082 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:07:34.791310  935082 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:07:34.800641  935082 start.go:563] Will wait 60s for crictl version
	I0914 01:07:34.800791  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:07:34.804431  935082 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:07:34.848460  935082 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 01:07:34.848577  935082 ssh_runner.go:195] Run: crio --version
	I0914 01:07:34.897223  935082 ssh_runner.go:195] Run: crio --version
	I0914 01:07:34.938288  935082 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0914 01:07:34.941057  935082 cli_runner.go:164] Run: docker network inspect ha-401927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 01:07:34.956766  935082 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 01:07:34.960609  935082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:07:34.971504  935082 kubeadm.go:883] updating cluster {Name:ha-401927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-401927 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:07:34.971669  935082 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:07:34.971728  935082 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:07:35.020967  935082 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:07:35.020993  935082 crio.go:433] Images already preloaded, skipping extraction
	I0914 01:07:35.021057  935082 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:07:35.059246  935082 crio.go:514] all images are preloaded for cri-o runtime.
	I0914 01:07:35.059273  935082 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:07:35.059284  935082 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0914 01:07:35.059398  935082 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-401927 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-401927 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:07:35.059490  935082 ssh_runner.go:195] Run: crio config
	I0914 01:07:35.110724  935082 cni.go:84] Creating CNI manager for ""
	I0914 01:07:35.110756  935082 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0914 01:07:35.110765  935082 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:07:35.110796  935082 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-401927 NodeName:ha-401927 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:07:35.110957  935082 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-401927"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:07:35.110979  935082 kube-vip.go:115] generating kube-vip config ...
	I0914 01:07:35.111045  935082 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0914 01:07:35.124248  935082 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 01:07:35.124365  935082 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0914 01:07:35.124430  935082 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:07:35.133409  935082 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:07:35.133526  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0914 01:07:35.143308  935082 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0914 01:07:35.162625  935082 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:07:35.181630  935082 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0914 01:07:35.201410  935082 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0914 01:07:35.219928  935082 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0914 01:07:35.223710  935082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:07:35.235333  935082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:35.323442  935082 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:35.337892  935082 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927 for IP: 192.168.49.2
	I0914 01:07:35.337966  935082 certs.go:194] generating shared ca certs ...
	I0914 01:07:35.337995  935082 certs.go:226] acquiring lock for ca certs: {Name:mk51aad7f25871620dee3805dbb159a74d927d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:35.338185  935082 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key
	I0914 01:07:35.338256  935082 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key
	I0914 01:07:35.338290  935082 certs.go:256] generating profile certs ...
	I0914 01:07:35.338410  935082 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/client.key
	I0914 01:07:35.338458  935082 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.key.5f569a09
	I0914 01:07:35.338490  935082 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.crt.5f569a09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0914 01:07:35.697230  935082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.crt.5f569a09 ...
	I0914 01:07:35.697277  935082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.crt.5f569a09: {Name:mk16e048f75a61b72e15f8752c203d181c2cc2ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:35.697507  935082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.key.5f569a09 ...
	I0914 01:07:35.697525  935082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.key.5f569a09: {Name:mk6fada8416e4b109fd7c6f5b8562c2bb0ae99b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:35.697624  935082 certs.go:381] copying /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.crt.5f569a09 -> /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.crt
	I0914 01:07:35.697782  935082 certs.go:385] copying /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.key.5f569a09 -> /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.key
	I0914 01:07:35.697924  935082 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/proxy-client.key
	I0914 01:07:35.697943  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 01:07:35.697959  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 01:07:35.697976  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 01:07:35.697994  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 01:07:35.698009  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 01:07:35.698025  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 01:07:35.698040  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 01:07:35.698055  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 01:07:35.698111  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079.pem (1338 bytes)
	W0914 01:07:35.698144  935082 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079_empty.pem, impossibly tiny 0 bytes
	I0914 01:07:35.698156  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 01:07:35.698181  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:07:35.698206  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:07:35.698231  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem (1679 bytes)
	I0914 01:07:35.698278  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem (1708 bytes)
	I0914 01:07:35.698312  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem -> /usr/share/ca-certificates/8740792.pem
	I0914 01:07:35.698328  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:07:35.698339  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079.pem -> /usr/share/ca-certificates/874079.pem
	I0914 01:07:35.698982  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:07:35.724069  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 01:07:35.750022  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:07:35.774393  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:07:35.798372  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:07:35.822791  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:07:35.847247  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:07:35.871496  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:07:35.895525  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem --> /usr/share/ca-certificates/8740792.pem (1708 bytes)
	I0914 01:07:35.919735  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:07:35.944434  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079.pem --> /usr/share/ca-certificates/874079.pem (1338 bytes)
	I0914 01:07:35.969363  935082 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:07:35.987787  935082 ssh_runner.go:195] Run: openssl version
	I0914 01:07:35.993287  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8740792.pem && ln -fs /usr/share/ca-certificates/8740792.pem /etc/ssl/certs/8740792.pem"
	I0914 01:07:36.006229  935082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8740792.pem
	I0914 01:07:36.013925  935082 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 00:55 /usr/share/ca-certificates/8740792.pem
	I0914 01:07:36.014026  935082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8740792.pem
	I0914 01:07:36.021630  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8740792.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:07:36.031792  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:07:36.041746  935082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:07:36.045466  935082 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 00:35 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:07:36.045538  935082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:07:36.052865  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:07:36.062750  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/874079.pem && ln -fs /usr/share/ca-certificates/874079.pem /etc/ssl/certs/874079.pem"
	I0914 01:07:36.072886  935082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/874079.pem
	I0914 01:07:36.076617  935082 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 00:55 /usr/share/ca-certificates/874079.pem
	I0914 01:07:36.076687  935082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/874079.pem
	I0914 01:07:36.084038  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/874079.pem /etc/ssl/certs/51391683.0"
	I0914 01:07:36.093750  935082 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:07:36.097468  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:07:36.104606  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:07:36.112424  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:07:36.119718  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:07:36.126913  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:07:36.134094  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:07:36.141885  935082 kubeadm.go:392] StartCluster: {Name:ha-401927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-401927 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:07:36.142040  935082 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0914 01:07:36.142106  935082 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:07:36.182223  935082 cri.go:89] found id: ""
	I0914 01:07:36.182324  935082 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:07:36.191261  935082 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:07:36.191284  935082 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:07:36.191363  935082 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:07:36.199749  935082 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:07:36.200214  935082 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-401927" does not appear in /home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 01:07:36.200328  935082 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-868698/kubeconfig needs updating (will repair): [kubeconfig missing "ha-401927" cluster setting kubeconfig missing "ha-401927" context setting]
	I0914 01:07:36.200601  935082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/kubeconfig: {Name:mk4bce51b3b1a0b5e086688a43a01615410b8350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:36.201025  935082 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 01:07:36.201303  935082 kapi.go:59] client config for ha-401927: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/client.key", CAFile:"/home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1e6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0914 01:07:36.201785  935082 cert_rotation.go:140] Starting client certificate rotation controller
	I0914 01:07:36.201972  935082 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:07:36.210650  935082 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0914 01:07:36.210671  935082 kubeadm.go:597] duration metric: took 19.381722ms to restartPrimaryControlPlane
	I0914 01:07:36.210681  935082 kubeadm.go:394] duration metric: took 68.8067ms to StartCluster
	I0914 01:07:36.210715  935082 settings.go:142] acquiring lock: {Name:mk58b1b9b697202ac4a931cd839962dd8a5a8fe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:36.210778  935082 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 01:07:36.211423  935082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-868698/kubeconfig: {Name:mk4bce51b3b1a0b5e086688a43a01615410b8350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:36.211624  935082 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:36.211650  935082 start.go:241] waiting for startup goroutines ...
	I0914 01:07:36.211665  935082 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:07:36.212111  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:36.215608  935082 out.go:177] * Enabled addons: 
	I0914 01:07:36.218229  935082 addons.go:510] duration metric: took 6.56705ms for enable addons: enabled=[]
	I0914 01:07:36.218263  935082 start.go:246] waiting for cluster config update ...
	I0914 01:07:36.218273  935082 start.go:255] writing updated cluster config ...
	I0914 01:07:36.221180  935082 out.go:201] 
	I0914 01:07:36.223930  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:36.224040  935082 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/config.json ...
	I0914 01:07:36.226882  935082 out.go:177] * Starting "ha-401927-m02" control-plane node in "ha-401927" cluster
	I0914 01:07:36.229470  935082 cache.go:121] Beginning downloading kic base image for docker with crio
	I0914 01:07:36.232146  935082 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0914 01:07:36.234696  935082 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:07:36.234719  935082 cache.go:56] Caching tarball of preloaded images
	I0914 01:07:36.234786  935082 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 01:07:36.234837  935082 preload.go:172] Found /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 01:07:36.234853  935082 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 01:07:36.234980  935082 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/config.json ...
	W0914 01:07:36.253409  935082 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 is of wrong architecture
	I0914 01:07:36.253429  935082 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 01:07:36.253512  935082 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 01:07:36.253536  935082 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 01:07:36.253542  935082 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 01:07:36.253550  935082 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 01:07:36.253556  935082 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0914 01:07:36.254758  935082 image.go:273] response: 
	I0914 01:07:36.378030  935082 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0914 01:07:36.378071  935082 cache.go:194] Successfully downloaded all kic artifacts
	I0914 01:07:36.378102  935082 start.go:360] acquireMachinesLock for ha-401927-m02: {Name:mkdaf9ea22c4677c5344abf3810ef74be090ede7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:07:36.378182  935082 start.go:364] duration metric: took 55.908µs to acquireMachinesLock for "ha-401927-m02"
	I0914 01:07:36.378207  935082 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:07:36.378213  935082 fix.go:54] fixHost starting: m02
	I0914 01:07:36.378495  935082 cli_runner.go:164] Run: docker container inspect ha-401927-m02 --format={{.State.Status}}
	I0914 01:07:36.394496  935082 fix.go:112] recreateIfNeeded on ha-401927-m02: state=Stopped err=<nil>
	W0914 01:07:36.394524  935082 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:07:36.397550  935082 out.go:177] * Restarting existing docker container for "ha-401927-m02" ...
	I0914 01:07:36.400160  935082 cli_runner.go:164] Run: docker start ha-401927-m02
	I0914 01:07:36.704369  935082 cli_runner.go:164] Run: docker container inspect ha-401927-m02 --format={{.State.Status}}
	I0914 01:07:36.724206  935082 kic.go:430] container "ha-401927-m02" state is running.
	I0914 01:07:36.724563  935082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927-m02
	I0914 01:07:36.748519  935082 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/config.json ...
	I0914 01:07:36.748761  935082 machine.go:93] provisionDockerMachine start ...
	I0914 01:07:36.748817  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m02
	I0914 01:07:36.766969  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:07:36.767214  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33629 <nil> <nil>}
	I0914 01:07:36.767230  935082 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:07:36.767824  935082 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0914 01:07:39.943018  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401927-m02
	
	I0914 01:07:39.943100  935082 ubuntu.go:169] provisioning hostname "ha-401927-m02"
	I0914 01:07:39.943204  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m02
	I0914 01:07:39.965416  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:07:39.965719  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33629 <nil> <nil>}
	I0914 01:07:39.965755  935082 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-401927-m02 && echo "ha-401927-m02" | sudo tee /etc/hostname
	I0914 01:07:40.172063  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401927-m02
	
	I0914 01:07:40.172208  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m02
	I0914 01:07:40.226190  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:07:40.226431  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33629 <nil> <nil>}
	I0914 01:07:40.226447  935082 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-401927-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-401927-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-401927-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:07:40.402319  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:07:40.402392  935082 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-868698/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-868698/.minikube}
	I0914 01:07:40.402424  935082 ubuntu.go:177] setting up certificates
	I0914 01:07:40.402445  935082 provision.go:84] configureAuth start
	I0914 01:07:40.402557  935082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927-m02
	I0914 01:07:40.432542  935082 provision.go:143] copyHostCerts
	I0914 01:07:40.432582  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem
	I0914 01:07:40.432615  935082 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem, removing ...
	I0914 01:07:40.432622  935082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem
	I0914 01:07:40.432700  935082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem (1078 bytes)
	I0914 01:07:40.432780  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem
	I0914 01:07:40.432799  935082 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem, removing ...
	I0914 01:07:40.432803  935082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem
	I0914 01:07:40.432831  935082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem (1123 bytes)
	I0914 01:07:40.432869  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem
	I0914 01:07:40.432886  935082 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem, removing ...
	I0914 01:07:40.432890  935082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem
	I0914 01:07:40.432928  935082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem (1679 bytes)
	I0914 01:07:40.432981  935082 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem org=jenkins.ha-401927-m02 san=[127.0.0.1 192.168.49.3 ha-401927-m02 localhost minikube]
	I0914 01:07:41.741234  935082 provision.go:177] copyRemoteCerts
	I0914 01:07:41.741425  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:07:41.741492  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m02
	I0914 01:07:41.771376  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33629 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m02/id_rsa Username:docker}
	I0914 01:07:41.919185  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 01:07:41.919248  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:07:42.031412  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 01:07:42.031480  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 01:07:42.100769  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 01:07:42.100845  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:07:42.224773  935082 provision.go:87] duration metric: took 1.822284836s to configureAuth
	I0914 01:07:42.224849  935082 ubuntu.go:193] setting minikube options for container-runtime
	I0914 01:07:42.225163  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:42.225352  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m02
	I0914 01:07:42.255406  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:07:42.255675  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33629 <nil> <nil>}
	I0914 01:07:42.255693  935082 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:07:42.953382  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:07:42.953469  935082 machine.go:96] duration metric: took 6.204698785s to provisionDockerMachine
	I0914 01:07:42.953495  935082 start.go:293] postStartSetup for "ha-401927-m02" (driver="docker")
	I0914 01:07:42.953531  935082 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:07:42.953618  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:07:42.953688  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m02
	I0914 01:07:42.982930  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33629 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m02/id_rsa Username:docker}
	I0914 01:07:43.098852  935082 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:07:43.102732  935082 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 01:07:43.102771  935082 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 01:07:43.102820  935082 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 01:07:43.102831  935082 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 01:07:43.102841  935082 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/addons for local assets ...
	I0914 01:07:43.102899  935082 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/files for local assets ...
	I0914 01:07:43.102979  935082 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem -> 8740792.pem in /etc/ssl/certs
	I0914 01:07:43.102992  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem -> /etc/ssl/certs/8740792.pem
	I0914 01:07:43.103099  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:07:43.119112  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem --> /etc/ssl/certs/8740792.pem (1708 bytes)
	I0914 01:07:43.158808  935082 start.go:296] duration metric: took 205.274568ms for postStartSetup
	I0914 01:07:43.158891  935082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 01:07:43.158932  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m02
	I0914 01:07:43.206904  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33629 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m02/id_rsa Username:docker}
	I0914 01:07:43.309649  935082 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 01:07:43.318468  935082 fix.go:56] duration metric: took 6.940246208s for fixHost
	I0914 01:07:43.318490  935082 start.go:83] releasing machines lock for "ha-401927-m02", held for 6.94029657s
	I0914 01:07:43.318560  935082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927-m02
	I0914 01:07:43.349702  935082 out.go:177] * Found network options:
	I0914 01:07:43.352669  935082 out.go:177]   - NO_PROXY=192.168.49.2
	W0914 01:07:43.355117  935082 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 01:07:43.355155  935082 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 01:07:43.355235  935082 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:07:43.355297  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m02
	I0914 01:07:43.355555  935082 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:07:43.355612  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m02
	I0914 01:07:43.389581  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33629 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m02/id_rsa Username:docker}
	I0914 01:07:43.411956  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33629 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m02/id_rsa Username:docker}
	I0914 01:07:43.714791  935082 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 01:07:43.733759  935082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:07:43.811062  935082 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 01:07:43.811210  935082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:07:43.891543  935082 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 01:07:43.891611  935082 start.go:495] detecting cgroup driver to use...
	I0914 01:07:43.891655  935082 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 01:07:43.891727  935082 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:07:43.958740  935082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:07:44.016272  935082 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:07:44.016389  935082 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:07:44.066888  935082 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:07:44.103957  935082 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:07:44.605942  935082 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:07:44.904219  935082 docker.go:233] disabling docker service ...
	I0914 01:07:44.904301  935082 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:07:44.920999  935082 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:07:44.933505  935082 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:07:45.192965  935082 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:07:45.499625  935082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:07:45.556261  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:07:45.635763  935082 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:07:45.635876  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:45.696319  935082 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:07:45.696442  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:45.744389  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:45.817721  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:45.856850  935082 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:07:45.899479  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:45.933824  935082 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:45.946747  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:07:45.962165  935082 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:07:45.977195  935082 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:07:45.988879  935082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:46.255806  935082 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:07:47.597620  935082 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.34177626s)
	I0914 01:07:47.597666  935082 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:07:47.597728  935082 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:07:47.603966  935082 start.go:563] Will wait 60s for crictl version
	I0914 01:07:47.604078  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:07:47.609863  935082 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:07:47.682788  935082 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 01:07:47.682945  935082 ssh_runner.go:195] Run: crio --version
	I0914 01:07:47.793173  935082 ssh_runner.go:195] Run: crio --version
	I0914 01:07:47.913757  935082 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0914 01:07:47.916964  935082 out.go:177]   - env NO_PROXY=192.168.49.2
	I0914 01:07:47.919734  935082 cli_runner.go:164] Run: docker network inspect ha-401927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 01:07:47.952154  935082 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 01:07:47.957463  935082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:07:47.970792  935082 mustload.go:65] Loading cluster: ha-401927
	I0914 01:07:47.971029  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:47.971300  935082 cli_runner.go:164] Run: docker container inspect ha-401927 --format={{.State.Status}}
	I0914 01:07:48.006093  935082 host.go:66] Checking if "ha-401927" exists ...
	I0914 01:07:48.006395  935082 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927 for IP: 192.168.49.3
	I0914 01:07:48.006404  935082 certs.go:194] generating shared ca certs ...
	I0914 01:07:48.006420  935082 certs.go:226] acquiring lock for ca certs: {Name:mk51aad7f25871620dee3805dbb159a74d927d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:07:48.006543  935082 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key
	I0914 01:07:48.006585  935082 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key
	I0914 01:07:48.006592  935082 certs.go:256] generating profile certs ...
	I0914 01:07:48.006670  935082 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/client.key
	I0914 01:07:48.006745  935082 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.key.2860d730
	I0914 01:07:48.006799  935082 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/proxy-client.key
	I0914 01:07:48.006809  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 01:07:48.006822  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 01:07:48.006833  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 01:07:48.006845  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 01:07:48.006857  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0914 01:07:48.006869  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0914 01:07:48.006880  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0914 01:07:48.006891  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0914 01:07:48.006944  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079.pem (1338 bytes)
	W0914 01:07:48.006974  935082 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079_empty.pem, impossibly tiny 0 bytes
	I0914 01:07:48.006984  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 01:07:48.007009  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:07:48.007032  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:07:48.007053  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem (1679 bytes)
	I0914 01:07:48.007097  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem (1708 bytes)
	I0914 01:07:48.007127  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem -> /usr/share/ca-certificates/8740792.pem
	I0914 01:07:48.007143  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:07:48.007155  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079.pem -> /usr/share/ca-certificates/874079.pem
	I0914 01:07:48.007215  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:07:48.067734  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33624 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927/id_rsa Username:docker}
	I0914 01:07:48.153621  935082 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0914 01:07:48.160660  935082 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0914 01:07:48.173796  935082 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0914 01:07:48.177747  935082 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0914 01:07:48.192975  935082 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0914 01:07:48.196974  935082 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0914 01:07:48.209011  935082 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0914 01:07:48.212785  935082 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0914 01:07:48.225719  935082 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0914 01:07:48.230471  935082 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0914 01:07:48.242930  935082 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0914 01:07:48.246851  935082 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0914 01:07:48.263240  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:07:48.288683  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 01:07:48.315224  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:07:48.358780  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:07:48.400817  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:07:48.434587  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:07:48.473270  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:07:48.511826  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 01:07:48.548029  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem --> /usr/share/ca-certificates/8740792.pem (1708 bytes)
	I0914 01:07:48.588280  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:07:48.624299  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079.pem --> /usr/share/ca-certificates/874079.pem (1338 bytes)
	I0914 01:07:48.659894  935082 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0914 01:07:48.690560  935082 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0914 01:07:48.716004  935082 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0914 01:07:48.740632  935082 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0914 01:07:48.765735  935082 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0914 01:07:48.806620  935082 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0914 01:07:48.832014  935082 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0914 01:07:48.860146  935082 ssh_runner.go:195] Run: openssl version
	I0914 01:07:48.868736  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:07:48.887093  935082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:07:48.891805  935082 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 00:35 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:07:48.891883  935082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:07:48.900103  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:07:48.912467  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/874079.pem && ln -fs /usr/share/ca-certificates/874079.pem /etc/ssl/certs/874079.pem"
	I0914 01:07:48.925181  935082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/874079.pem
	I0914 01:07:48.930433  935082 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 00:55 /usr/share/ca-certificates/874079.pem
	I0914 01:07:48.930511  935082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/874079.pem
	I0914 01:07:48.939949  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/874079.pem /etc/ssl/certs/51391683.0"
	I0914 01:07:48.952085  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8740792.pem && ln -fs /usr/share/ca-certificates/8740792.pem /etc/ssl/certs/8740792.pem"
	I0914 01:07:48.964595  935082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8740792.pem
	I0914 01:07:48.969498  935082 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 00:55 /usr/share/ca-certificates/8740792.pem
	I0914 01:07:48.969582  935082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8740792.pem
	I0914 01:07:48.978279  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8740792.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:07:48.994364  935082 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:07:48.999621  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:07:49.008955  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:07:49.023000  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:07:49.032298  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:07:49.043605  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:07:49.051658  935082 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:07:49.063468  935082 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I0914 01:07:49.063578  935082 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-401927-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-401927 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:07:49.063610  935082 kube-vip.go:115] generating kube-vip config ...
	I0914 01:07:49.063679  935082 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0914 01:07:49.091651  935082 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0914 01:07:49.091728  935082 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0914 01:07:49.091828  935082 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:07:49.111601  935082 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:07:49.111689  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0914 01:07:49.123258  935082 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0914 01:07:49.145051  935082 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:07:49.165548  935082 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0914 01:07:49.186848  935082 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0914 01:07:49.190640  935082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:07:49.202582  935082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:49.327154  935082 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.341428  935082 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0914 01:07:49.341714  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:49.346677  935082 out.go:177] * Verifying Kubernetes components...
	I0914 01:07:49.349180  935082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:07:49.469782  935082 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:07:49.483093  935082 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 01:07:49.483356  935082 kapi.go:59] client config for ha-401927: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/client.key", CAFile:"/home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1e6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0914 01:07:49.483417  935082 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0914 01:07:49.483624  935082 node_ready.go:35] waiting up to 6m0s for node "ha-401927-m02" to be "Ready" ...
	I0914 01:07:49.483741  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:07:49.483752  935082 round_trippers.go:469] Request Headers:
	I0914 01:07:49.483761  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:07:49.483770  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:00.330675  935082 round_trippers.go:574] Response Status: 500 Internal Server Error in 10846 milliseconds
	I0914 01:08:00.331193  935082 node_ready.go:53] error getting node "ha-401927-m02": etcdserver: request timed out
	I0914 01:08:00.331265  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:08:00.331272  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:00.331280  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:00.331285  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:04.308392  935082 round_trippers.go:574] Response Status: 200 OK in 3977 milliseconds
	I0914 01:08:04.309785  935082 node_ready.go:49] node "ha-401927-m02" has status "Ready":"True"
	I0914 01:08:04.309807  935082 node_ready.go:38] duration metric: took 14.82615786s for node "ha-401927-m02" to be "Ready" ...
	I0914 01:08:04.309818  935082 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:08:04.309859  935082 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0914 01:08:04.309870  935082 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0914 01:08:04.309930  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 01:08:04.309935  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:04.309943  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:04.309948  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:04.397492  935082 round_trippers.go:574] Response Status: 429 Too Many Requests in 87 milliseconds
	I0914 01:08:05.398515  935082 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 01:08:05.398565  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 01:08:05.398571  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.398580  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.398585  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.409053  935082 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0914 01:08:05.422346  935082 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:05.422521  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:08:05.422546  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.422570  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.422589  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.427540  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:08:05.428344  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:05.428357  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.428365  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.428369  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.438969  935082 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0914 01:08:05.439454  935082 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:05.439464  935082 pod_ready.go:82] duration metric: took 17.048706ms for pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:05.439474  935082 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zrv9t" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:05.439535  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zrv9t
	I0914 01:08:05.439540  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.439547  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.439551  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.444654  935082 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 01:08:05.445334  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:05.445344  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.445352  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.445355  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.448794  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:05.449472  935082 pod_ready.go:93] pod "coredns-7c65d6cfc9-zrv9t" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:05.449522  935082 pod_ready.go:82] duration metric: took 10.040642ms for pod "coredns-7c65d6cfc9-zrv9t" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:05.449548  935082 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:05.449632  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401927
	I0914 01:08:05.449656  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.449687  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.449705  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.453493  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:05.454119  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:05.454164  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.454184  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.454203  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.457564  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:05.458140  935082 pod_ready.go:93] pod "etcd-ha-401927" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:05.458176  935082 pod_ready.go:82] duration metric: took 8.608076ms for pod "etcd-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:05.458201  935082 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:05.458287  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401927-m02
	I0914 01:08:05.458320  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.458343  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.458363  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.465154  935082 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 01:08:05.465858  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:08:05.465904  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.465927  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.465946  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.470274  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:08:05.470885  935082 pod_ready.go:93] pod "etcd-ha-401927-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:05.470922  935082 pod_ready.go:82] duration metric: took 12.689621ms for pod "etcd-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:05.470960  935082 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:05.471054  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401927-m03
	I0914 01:08:05.471077  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.471111  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.471132  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.475142  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:05.599034  935082 request.go:632] Waited for 123.207496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m03
	I0914 01:08:05.599144  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m03
	I0914 01:08:05.599202  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.599237  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.599255  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.609037  935082 round_trippers.go:574] Response Status: 404 Not Found in 9 milliseconds
	I0914 01:08:05.609228  935082 pod_ready.go:98] node "ha-401927-m03" hosting pod "etcd-ha-401927-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-401927-m03": nodes "ha-401927-m03" not found
	I0914 01:08:05.609274  935082 pod_ready.go:82] duration metric: took 138.289914ms for pod "etcd-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	E0914 01:08:05.609297  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927-m03" hosting pod "etcd-ha-401927-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-401927-m03": nodes "ha-401927-m03" not found
	I0914 01:08:05.609341  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:05.798593  935082 request.go:632] Waited for 189.161791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927
	I0914 01:08:05.798709  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927
	I0914 01:08:05.798743  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.798777  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.798798  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:05.802612  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:05.998964  935082 request.go:632] Waited for 195.341034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:05.999044  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:05.999055  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:05.999102  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:05.999110  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:06.011161  935082 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0914 01:08:06.012281  935082 pod_ready.go:93] pod "kube-apiserver-ha-401927" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:06.012306  935082 pod_ready.go:82] duration metric: took 402.938216ms for pod "kube-apiserver-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:06.012331  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:06.199299  935082 request.go:632] Waited for 186.889148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927-m02
	I0914 01:08:06.199375  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927-m02
	I0914 01:08:06.199388  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:06.200079  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:06.200087  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:06.206837  935082 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 01:08:06.399354  935082 request.go:632] Waited for 191.254435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:08:06.399426  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:08:06.399436  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:06.399445  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:06.399453  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:06.403303  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:06.404339  935082 pod_ready.go:93] pod "kube-apiserver-ha-401927-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:06.404368  935082 pod_ready.go:82] duration metric: took 392.025316ms for pod "kube-apiserver-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:06.404385  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:06.599385  935082 request.go:632] Waited for 194.887991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927-m03
	I0914 01:08:06.599462  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927-m03
	I0914 01:08:06.599473  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:06.599494  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:06.599501  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:06.602532  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:08:06.799073  935082 request.go:632] Waited for 195.286537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m03
	I0914 01:08:06.799192  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m03
	I0914 01:08:06.799228  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:06.799254  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:06.799274  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:06.802090  935082 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0914 01:08:06.802276  935082 pod_ready.go:98] node "ha-401927-m03" hosting pod "kube-apiserver-ha-401927-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-401927-m03": nodes "ha-401927-m03" not found
	I0914 01:08:06.802308  935082 pod_ready.go:82] duration metric: took 397.915384ms for pod "kube-apiserver-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	E0914 01:08:06.802330  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927-m03" hosting pod "kube-apiserver-ha-401927-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-401927-m03": nodes "ha-401927-m03" not found
	I0914 01:08:06.802378  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:06.998611  935082 request.go:632] Waited for 196.110633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927
	I0914 01:08:06.998689  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927
	I0914 01:08:06.998701  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:06.998710  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:06.998714  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:07.001855  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:07.199210  935082 request.go:632] Waited for 196.306132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:07.199276  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:07.199282  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:07.199291  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:07.199295  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:07.202201  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:08:07.202851  935082 pod_ready.go:93] pod "kube-controller-manager-ha-401927" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:07.202871  935082 pod_ready.go:82] duration metric: took 400.467948ms for pod "kube-controller-manager-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:07.202882  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:07.399305  935082 request.go:632] Waited for 196.357142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927-m02
	I0914 01:08:07.399483  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927-m02
	I0914 01:08:07.399500  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:07.399509  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:07.399515  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:07.402752  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:07.599361  935082 request.go:632] Waited for 195.269907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:08:07.599465  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:08:07.599487  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:07.599551  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:07.599568  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:07.606585  935082 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0914 01:08:07.607647  935082 pod_ready.go:93] pod "kube-controller-manager-ha-401927-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:07.607721  935082 pod_ready.go:82] duration metric: took 404.824867ms for pod "kube-controller-manager-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:07.607749  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:07.798660  935082 request.go:632] Waited for 190.769688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927-m03
	I0914 01:08:07.798769  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927-m03
	I0914 01:08:07.798791  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:07.798829  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:07.798847  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:07.803120  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:08:07.998699  935082 request.go:632] Waited for 194.1611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m03
	I0914 01:08:07.998826  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m03
	I0914 01:08:07.998861  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:07.998889  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:07.998909  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:08.001892  935082 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0914 01:08:08.002378  935082 pod_ready.go:98] node "ha-401927-m03" hosting pod "kube-controller-manager-ha-401927-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-401927-m03": nodes "ha-401927-m03" not found
	I0914 01:08:08.002440  935082 pod_ready.go:82] duration metric: took 394.658928ms for pod "kube-controller-manager-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	E0914 01:08:08.002465  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927-m03" hosting pod "kube-controller-manager-ha-401927-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-401927-m03": nodes "ha-401927-m03" not found
	I0914 01:08:08.002503  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bx82b" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:08.199184  935082 request.go:632] Waited for 196.514981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bx82b
	I0914 01:08:08.199264  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bx82b
	I0914 01:08:08.199270  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:08.199279  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:08.199284  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:08.202768  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:08.399429  935082 request.go:632] Waited for 195.239769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:08:08.399549  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:08:08.399585  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:08.399613  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:08.399632  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:08.402731  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:08.403785  935082 pod_ready.go:93] pod "kube-proxy-bx82b" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:08.403832  935082 pod_ready.go:82] duration metric: took 401.300298ms for pod "kube-proxy-bx82b" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:08.403873  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dh9sg" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:08.598752  935082 request.go:632] Waited for 194.796325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dh9sg
	I0914 01:08:08.598938  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dh9sg
	I0914 01:08:08.598966  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:08.598988  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:08.599017  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:08.602561  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:08.799233  935082 request.go:632] Waited for 195.314517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:08.799340  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:08.799361  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:08.799397  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:08.799419  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:08.802258  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:08:08.803484  935082 pod_ready.go:93] pod "kube-proxy-dh9sg" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:08.803543  935082 pod_ready.go:82] duration metric: took 399.645133ms for pod "kube-proxy-dh9sg" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:08.803586  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mbxw6" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:08.999520  935082 request.go:632] Waited for 195.845055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbxw6
	I0914 01:08:08.999600  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbxw6
	I0914 01:08:08.999620  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:08.999635  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:08.999641  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:09.003771  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:08:09.199308  935082 request.go:632] Waited for 194.266262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m03
	I0914 01:08:09.199398  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m03
	I0914 01:08:09.199410  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:09.199420  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:09.199436  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:09.202574  935082 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0914 01:08:09.202836  935082 pod_ready.go:98] node "ha-401927-m03" hosting pod "kube-proxy-mbxw6" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-401927-m03": nodes "ha-401927-m03" not found
	I0914 01:08:09.202873  935082 pod_ready.go:82] duration metric: took 399.262717ms for pod "kube-proxy-mbxw6" in "kube-system" namespace to be "Ready" ...
	E0914 01:08:09.202890  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927-m03" hosting pod "kube-proxy-mbxw6" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-401927-m03": nodes "ha-401927-m03" not found
	I0914 01:08:09.202898  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vb5lf" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:09.399228  935082 request.go:632] Waited for 196.219299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vb5lf
	I0914 01:08:09.399312  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vb5lf
	I0914 01:08:09.399341  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:09.399352  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:09.399356  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:09.403046  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:09.599307  935082 request.go:632] Waited for 195.128765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:08:09.599432  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:08:09.599482  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:09.599510  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:09.599529  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:09.602779  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:08:09.603905  935082 pod_ready.go:93] pod "kube-proxy-vb5lf" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:09.603961  935082 pod_ready.go:82] duration metric: took 401.049235ms for pod "kube-proxy-vb5lf" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:09.603987  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:09.798879  935082 request.go:632] Waited for 194.800264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927
	I0914 01:08:09.798962  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927
	I0914 01:08:09.798973  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:09.798981  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:09.798986  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:09.801550  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:08:09.998698  935082 request.go:632] Waited for 196.264755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:09.998807  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:08:09.998818  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:09.998827  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:09.998842  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:10.006149  935082 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 01:08:10.006794  935082 pod_ready.go:93] pod "kube-scheduler-ha-401927" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:10.006821  935082 pod_ready.go:82] duration metric: took 402.813288ms for pod "kube-scheduler-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:10.006834  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:10.198772  935082 request.go:632] Waited for 191.86412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927-m02
	I0914 01:08:10.198844  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927-m02
	I0914 01:08:10.198852  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:10.198861  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:10.198866  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:10.201593  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:08:10.399466  935082 request.go:632] Waited for 197.343211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:08:10.399526  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:08:10.399533  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:10.399541  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:10.399547  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:10.404027  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:08:10.404870  935082 pod_ready.go:93] pod "kube-scheduler-ha-401927-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 01:08:10.404891  935082 pod_ready.go:82] duration metric: took 398.049051ms for pod "kube-scheduler-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:10.404901  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	I0914 01:08:10.599193  935082 request.go:632] Waited for 194.198925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927-m03
	I0914 01:08:10.599268  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927-m03
	I0914 01:08:10.599284  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:10.599293  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:10.599299  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:10.607322  935082 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 01:08:10.799185  935082 request.go:632] Waited for 191.196459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m03
	I0914 01:08:10.799244  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m03
	I0914 01:08:10.799253  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:10.799265  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:10.799272  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:10.803105  935082 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0914 01:08:10.803246  935082 pod_ready.go:98] node "ha-401927-m03" hosting pod "kube-scheduler-ha-401927-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-401927-m03": nodes "ha-401927-m03" not found
	I0914 01:08:10.803263  935082 pod_ready.go:82] duration metric: took 398.353635ms for pod "kube-scheduler-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	E0914 01:08:10.803273  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927-m03" hosting pod "kube-scheduler-ha-401927-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-401927-m03": nodes "ha-401927-m03" not found
	I0914 01:08:10.803284  935082 pod_ready.go:39] duration metric: took 6.493455916s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
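	(Editor's note, not part of the test output: the wait loop above fetches each system pod and its hosting node, and treats the pod as Ready only when the Ready condition is True and the node still exists, which is why the ha-401927-m03 pods are skipped with "nodes ... not found". As an illustration only, the same condition can be inspected by hand with kubectl; the context name below is an assumption based on the profile name in the log.)
	
	# Illustrative only -- assumes a kubeconfig context named after the "ha-401927" profile.
	kubectl --context ha-401927 -n kube-system get pod kube-scheduler-ha-401927 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'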
	I0914 01:08:10.803322  935082 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:08:10.803391  935082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:08:10.830786  935082 api_server.go:72] duration metric: took 21.489305322s to wait for apiserver process to appear ...
	I0914 01:08:10.830813  935082 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:08:10.830835  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:10.843329  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:10.843365  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
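	(Editor's note, not part of the test output: the 500 responses above and below come from polling the kube-apiserver /healthz endpoint; every individual check reports ok except the start-service-ip-repair-controllers post-start hook, whose failure reason is withheld in the response. A rough manual reproduction is sketched below; it assumes anonymous access to /healthz is enabled, which is the usual minikube default.)
	
	# Illustrative only -- queries the same endpoint the log is polling.
	# -k skips TLS verification; ?verbose lists each check, as in the log above.
	curl -k "https://192.168.49.2:8443/healthz?verbose"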
	I0914 01:08:11.331032  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:11.339446  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:11.339473  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:11.830941  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:11.842957  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:11.842989  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:12.331648  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:12.342474  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:12.342499  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:12.831034  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:12.842230  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:12.842259  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:13.331938  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:13.340184  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:13.340218  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:13.831892  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:13.841302  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:13.841357  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:14.331907  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:14.340211  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:14.340241  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:14.831928  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:14.839584  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:14.839626  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:15.331195  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:15.339337  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:15.339429  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:15.831872  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:15.843795  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:15.843869  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:16.331690  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:16.344084  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:16.344120  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:16.831690  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:16.839348  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:16.839378  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:17.330889  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:17.338482  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:17.338509  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:17.831751  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:17.839567  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:17.839608  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:18.331203  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:18.339007  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:18.339042  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:18.831596  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:18.839517  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:18.839542  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:19.330946  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:19.340567  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:19.340594  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:19.830987  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:19.844649  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:19.844729  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:20.330952  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:20.343932  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:20.344015  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:20.831508  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:20.839869  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:20.839960  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:21.331572  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:21.339745  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:21.339776  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:21.830898  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:21.842254  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:21.842281  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:22.331882  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:22.340909  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:22.340940  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:22.831519  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:22.839286  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:22.839315  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:23.330986  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:23.339316  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:23.339358  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:23.831639  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:23.840131  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:23.840159  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:24.331669  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:24.339512  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:24.339541  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:24.830972  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:24.844558  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:24.844590  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:25.330991  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:25.338673  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:25.338709  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:25.831196  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:25.839180  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:25.839210  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:26.331649  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:26.339229  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:26.339256  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:26.831470  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:26.839260  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:26.839287  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:27.331914  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:27.339664  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:27.339698  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:27.831199  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:27.839216  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:27.839248  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:28.331181  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:28.339500  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:28.339528  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:28.830973  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:28.844663  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:28.844701  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:29.330977  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:29.339016  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:29.339056  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:29.831715  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:29.844213  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:29.844299  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:30.331339  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:30.339709  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:30.339744  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:30.830959  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:30.854366  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:30.854393  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:31.330921  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:31.339155  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:31.339184  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:31.831777  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:31.839666  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:31.839698  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:32.330968  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:32.338605  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:32.338633  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:32.831193  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:32.853154  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:32.853182  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:33.331215  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:33.342908  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:33.342934  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:33.831632  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:33.848621  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:33.848651  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:34.331070  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:34.339486  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:34.339517  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:34.831114  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:34.843092  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:34.843124  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:35.331784  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:35.339530  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:35.339555  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:35.831123  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:35.840333  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:35.840416  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:36.331199  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:36.338889  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:36.338915  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:36.831571  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:36.839683  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:36.839709  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:37.330996  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:37.338985  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:37.339019  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:37.831610  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:37.843503  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:37.843531  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:38.331618  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:38.339181  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:38.339206  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:38.831782  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:38.839776  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:38.839807  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:39.331317  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:39.338971  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:39.339003  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:39.831604  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:39.840491  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:39.840536  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:40.330988  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:40.338915  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:40.338944  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:40.831783  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:40.842706  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:40.842736  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:41.331059  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:41.339486  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:41.339518  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:41.830958  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:41.839123  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:41.839159  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:42.331873  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:42.340006  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:42.340036  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:42.831612  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:42.839541  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:42.839572  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:43.331049  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:43.340496  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:43.340528  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:43.831023  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:43.838958  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:43.838988  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:44.331668  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:44.340638  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:44.340671  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:44.831117  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:44.842310  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:44.842344  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:45.331944  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:45.340753  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:45.340783  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:45.831221  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:45.839338  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:45.839366  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:46.330947  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:46.338811  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:46.338839  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:46.831494  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:46.843819  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:46.843848  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:47.331361  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:47.339134  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:47.339156  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:47.831479  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:47.846590  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:47.846617  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:48.331739  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:48.339442  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0914 01:08:48.339504  935082 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0914 01:08:48.831048  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:49.066112  935082 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": read tcp 192.168.49.1:49916->192.168.49.2:8443: read: connection reset by peer
	I0914 01:08:49.331539  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:49.331984  935082 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0914 01:08:49.831666  935082 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:49.831753  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:49.890723  935082 cri.go:89] found id: "94c318733bd78e6c67352b721db2407e5f31e3e7605e6cb161e43e0bbd5a0e41"
	I0914 01:08:49.890747  935082 cri.go:89] found id: "53520ade2de5372955f21dfacc389a61de951119ceefafc2d0674bf97df42fe0"
	I0914 01:08:49.890754  935082 cri.go:89] found id: ""
	I0914 01:08:49.890761  935082 logs.go:276] 2 containers: [94c318733bd78e6c67352b721db2407e5f31e3e7605e6cb161e43e0bbd5a0e41 53520ade2de5372955f21dfacc389a61de951119ceefafc2d0674bf97df42fe0]
	I0914 01:08:49.890817  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:49.895340  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:49.898841  935082 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:49.898906  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:49.947601  935082 cri.go:89] found id: "3a0eab5c98560bfc340da7d7b1ae66ab0730c476fac8007cb9da6dd504f20635"
	I0914 01:08:49.947632  935082 cri.go:89] found id: "05300c676cdbb56a371e7293896043af78e566d569b5ff4f7bf13765715b0d96"
	I0914 01:08:49.947637  935082 cri.go:89] found id: ""
	I0914 01:08:49.947648  935082 logs.go:276] 2 containers: [3a0eab5c98560bfc340da7d7b1ae66ab0730c476fac8007cb9da6dd504f20635 05300c676cdbb56a371e7293896043af78e566d569b5ff4f7bf13765715b0d96]
	I0914 01:08:49.947702  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:49.951757  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:49.955191  935082 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:49.955257  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:50.017729  935082 cri.go:89] found id: ""
	I0914 01:08:50.017757  935082 logs.go:276] 0 containers: []
	W0914 01:08:50.017767  935082 logs.go:278] No container was found matching "coredns"
	I0914 01:08:50.017775  935082 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:50.017842  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:50.084774  935082 cri.go:89] found id: "5e6d54250ac646298303e82440112f4ffd901d35589ff126e841db17438d5259"
	I0914 01:08:50.084801  935082 cri.go:89] found id: "9d00a43740d0da8687c949721650d191793b3ea450bc37cada42bdc367bda929"
	I0914 01:08:50.084807  935082 cri.go:89] found id: ""
	I0914 01:08:50.084814  935082 logs.go:276] 2 containers: [5e6d54250ac646298303e82440112f4ffd901d35589ff126e841db17438d5259 9d00a43740d0da8687c949721650d191793b3ea450bc37cada42bdc367bda929]
	I0914 01:08:50.084899  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:50.089138  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:50.095905  935082 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:50.095980  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:50.148830  935082 cri.go:89] found id: "d13bf7d29fc651152bf67c7d8dd851173d5c3596ea06a76af8e0f41f70ef31d2"
	I0914 01:08:50.148856  935082 cri.go:89] found id: ""
	I0914 01:08:50.148865  935082 logs.go:276] 1 containers: [d13bf7d29fc651152bf67c7d8dd851173d5c3596ea06a76af8e0f41f70ef31d2]
	I0914 01:08:50.148925  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:50.157240  935082 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:50.157346  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:50.210189  935082 cri.go:89] found id: "d46d64e1005013b2cbe00099902d9c65126b17149ee28b33f8cdd1f4a0ecb90f"
	I0914 01:08:50.210213  935082 cri.go:89] found id: "0526496f4020539f724985b818887b6e8dcb5ac273192a155b817d44c7262180"
	I0914 01:08:50.210218  935082 cri.go:89] found id: ""
	I0914 01:08:50.210225  935082 logs.go:276] 2 containers: [d46d64e1005013b2cbe00099902d9c65126b17149ee28b33f8cdd1f4a0ecb90f 0526496f4020539f724985b818887b6e8dcb5ac273192a155b817d44c7262180]
	I0914 01:08:50.210283  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:50.213966  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:50.217274  935082 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:50.217340  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:50.264766  935082 cri.go:89] found id: "9ef75ae2efe118d574f7065bf12cf3ebefbd42e6119d914c3cce6ac62053dacc"
	I0914 01:08:50.264837  935082 cri.go:89] found id: ""
	I0914 01:08:50.264871  935082 logs.go:276] 1 containers: [9ef75ae2efe118d574f7065bf12cf3ebefbd42e6119d914c3cce6ac62053dacc]
	I0914 01:08:50.264960  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:50.269151  935082 logs.go:123] Gathering logs for kindnet [9ef75ae2efe118d574f7065bf12cf3ebefbd42e6119d914c3cce6ac62053dacc] ...
	I0914 01:08:50.269228  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef75ae2efe118d574f7065bf12cf3ebefbd42e6119d914c3cce6ac62053dacc"
	I0914 01:08:50.316728  935082 logs.go:123] Gathering logs for container status ...
	I0914 01:08:50.316803  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:50.385143  935082 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:50.385220  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:50.464895  935082 logs.go:123] Gathering logs for etcd [3a0eab5c98560bfc340da7d7b1ae66ab0730c476fac8007cb9da6dd504f20635] ...
	I0914 01:08:50.464974  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a0eab5c98560bfc340da7d7b1ae66ab0730c476fac8007cb9da6dd504f20635"
	I0914 01:08:50.536741  935082 logs.go:123] Gathering logs for kube-proxy [d13bf7d29fc651152bf67c7d8dd851173d5c3596ea06a76af8e0f41f70ef31d2] ...
	I0914 01:08:50.536816  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d13bf7d29fc651152bf67c7d8dd851173d5c3596ea06a76af8e0f41f70ef31d2"
	I0914 01:08:50.590993  935082 logs.go:123] Gathering logs for kube-apiserver [94c318733bd78e6c67352b721db2407e5f31e3e7605e6cb161e43e0bbd5a0e41] ...
	I0914 01:08:50.591020  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c318733bd78e6c67352b721db2407e5f31e3e7605e6cb161e43e0bbd5a0e41"
	I0914 01:08:50.675318  935082 logs.go:123] Gathering logs for kube-apiserver [53520ade2de5372955f21dfacc389a61de951119ceefafc2d0674bf97df42fe0] ...
	I0914 01:08:50.675394  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53520ade2de5372955f21dfacc389a61de951119ceefafc2d0674bf97df42fe0"
	I0914 01:08:50.737012  935082 logs.go:123] Gathering logs for kube-controller-manager [d46d64e1005013b2cbe00099902d9c65126b17149ee28b33f8cdd1f4a0ecb90f] ...
	I0914 01:08:50.737037  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d46d64e1005013b2cbe00099902d9c65126b17149ee28b33f8cdd1f4a0ecb90f"
	I0914 01:08:50.801078  935082 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:50.801151  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:50.876567  935082 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:50.876648  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:51.277220  935082 logs.go:123] Gathering logs for kube-scheduler [9d00a43740d0da8687c949721650d191793b3ea450bc37cada42bdc367bda929] ...
	I0914 01:08:51.277438  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d00a43740d0da8687c949721650d191793b3ea450bc37cada42bdc367bda929"
	I0914 01:08:51.334817  935082 logs.go:123] Gathering logs for kube-controller-manager [0526496f4020539f724985b818887b6e8dcb5ac273192a155b817d44c7262180] ...
	I0914 01:08:51.334895  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0526496f4020539f724985b818887b6e8dcb5ac273192a155b817d44c7262180"
	I0914 01:08:51.396627  935082 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:51.396655  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:51.415827  935082 logs.go:123] Gathering logs for etcd [05300c676cdbb56a371e7293896043af78e566d569b5ff4f7bf13765715b0d96] ...
	I0914 01:08:51.415853  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05300c676cdbb56a371e7293896043af78e566d569b5ff4f7bf13765715b0d96"
	I0914 01:08:51.480401  935082 logs.go:123] Gathering logs for kube-scheduler [5e6d54250ac646298303e82440112f4ffd901d35589ff126e841db17438d5259] ...
	I0914 01:08:51.480486  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e6d54250ac646298303e82440112f4ffd901d35589ff126e841db17438d5259"
	I0914 01:08:54.078172  935082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 01:08:54.086345  935082 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 01:08:54.086472  935082 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0914 01:08:54.086530  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:54.086559  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:54.086578  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:54.100032  935082 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0914 01:08:54.100356  935082 api_server.go:141] control plane version: v1.31.1
	I0914 01:08:54.100407  935082 api_server.go:131] duration metric: took 43.269585502s to wait for apiserver health ...
	I0914 01:08:54.100432  935082 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 01:08:54.100478  935082 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:08:54.100553  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:08:54.148679  935082 cri.go:89] found id: "94c318733bd78e6c67352b721db2407e5f31e3e7605e6cb161e43e0bbd5a0e41"
	I0914 01:08:54.148704  935082 cri.go:89] found id: "53520ade2de5372955f21dfacc389a61de951119ceefafc2d0674bf97df42fe0"
	I0914 01:08:54.148709  935082 cri.go:89] found id: ""
	I0914 01:08:54.148716  935082 logs.go:276] 2 containers: [94c318733bd78e6c67352b721db2407e5f31e3e7605e6cb161e43e0bbd5a0e41 53520ade2de5372955f21dfacc389a61de951119ceefafc2d0674bf97df42fe0]
	I0914 01:08:54.148770  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:54.152290  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:54.155628  935082 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0914 01:08:54.155696  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:08:54.194668  935082 cri.go:89] found id: "3a0eab5c98560bfc340da7d7b1ae66ab0730c476fac8007cb9da6dd504f20635"
	I0914 01:08:54.194690  935082 cri.go:89] found id: "05300c676cdbb56a371e7293896043af78e566d569b5ff4f7bf13765715b0d96"
	I0914 01:08:54.194695  935082 cri.go:89] found id: ""
	I0914 01:08:54.194702  935082 logs.go:276] 2 containers: [3a0eab5c98560bfc340da7d7b1ae66ab0730c476fac8007cb9da6dd504f20635 05300c676cdbb56a371e7293896043af78e566d569b5ff4f7bf13765715b0d96]
	I0914 01:08:54.194759  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:54.198721  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:54.202195  935082 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0914 01:08:54.202268  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:08:54.246494  935082 cri.go:89] found id: ""
	I0914 01:08:54.246523  935082 logs.go:276] 0 containers: []
	W0914 01:08:54.246533  935082 logs.go:278] No container was found matching "coredns"
	I0914 01:08:54.246540  935082 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:08:54.246596  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:08:54.286099  935082 cri.go:89] found id: "5e6d54250ac646298303e82440112f4ffd901d35589ff126e841db17438d5259"
	I0914 01:08:54.286119  935082 cri.go:89] found id: "9d00a43740d0da8687c949721650d191793b3ea450bc37cada42bdc367bda929"
	I0914 01:08:54.286124  935082 cri.go:89] found id: ""
	I0914 01:08:54.286131  935082 logs.go:276] 2 containers: [5e6d54250ac646298303e82440112f4ffd901d35589ff126e841db17438d5259 9d00a43740d0da8687c949721650d191793b3ea450bc37cada42bdc367bda929]
	I0914 01:08:54.286186  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:54.289968  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:54.293405  935082 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:08:54.293503  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:08:54.331322  935082 cri.go:89] found id: "d13bf7d29fc651152bf67c7d8dd851173d5c3596ea06a76af8e0f41f70ef31d2"
	I0914 01:08:54.331389  935082 cri.go:89] found id: ""
	I0914 01:08:54.331404  935082 logs.go:276] 1 containers: [d13bf7d29fc651152bf67c7d8dd851173d5c3596ea06a76af8e0f41f70ef31d2]
	I0914 01:08:54.331470  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:54.335189  935082 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:08:54.335259  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:08:54.370906  935082 cri.go:89] found id: "d46d64e1005013b2cbe00099902d9c65126b17149ee28b33f8cdd1f4a0ecb90f"
	I0914 01:08:54.370929  935082 cri.go:89] found id: "0526496f4020539f724985b818887b6e8dcb5ac273192a155b817d44c7262180"
	I0914 01:08:54.370935  935082 cri.go:89] found id: ""
	I0914 01:08:54.370942  935082 logs.go:276] 2 containers: [d46d64e1005013b2cbe00099902d9c65126b17149ee28b33f8cdd1f4a0ecb90f 0526496f4020539f724985b818887b6e8dcb5ac273192a155b817d44c7262180]
	I0914 01:08:54.371014  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:54.374728  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:54.377873  935082 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0914 01:08:54.377945  935082 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:08:54.423331  935082 cri.go:89] found id: "9ef75ae2efe118d574f7065bf12cf3ebefbd42e6119d914c3cce6ac62053dacc"
	I0914 01:08:54.423354  935082 cri.go:89] found id: ""
	I0914 01:08:54.423361  935082 logs.go:276] 1 containers: [9ef75ae2efe118d574f7065bf12cf3ebefbd42e6119d914c3cce6ac62053dacc]
	I0914 01:08:54.423416  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:08:54.427184  935082 logs.go:123] Gathering logs for dmesg ...
	I0914 01:08:54.427209  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:08:54.444088  935082 logs.go:123] Gathering logs for kube-scheduler [5e6d54250ac646298303e82440112f4ffd901d35589ff126e841db17438d5259] ...
	I0914 01:08:54.444118  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e6d54250ac646298303e82440112f4ffd901d35589ff126e841db17438d5259"
	I0914 01:08:54.506364  935082 logs.go:123] Gathering logs for kube-controller-manager [d46d64e1005013b2cbe00099902d9c65126b17149ee28b33f8cdd1f4a0ecb90f] ...
	I0914 01:08:54.506398  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d46d64e1005013b2cbe00099902d9c65126b17149ee28b33f8cdd1f4a0ecb90f"
	I0914 01:08:54.564533  935082 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:08:54.564565  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:08:54.847629  935082 logs.go:123] Gathering logs for kube-proxy [d13bf7d29fc651152bf67c7d8dd851173d5c3596ea06a76af8e0f41f70ef31d2] ...
	I0914 01:08:54.847663  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d13bf7d29fc651152bf67c7d8dd851173d5c3596ea06a76af8e0f41f70ef31d2"
	I0914 01:08:54.900527  935082 logs.go:123] Gathering logs for kube-controller-manager [0526496f4020539f724985b818887b6e8dcb5ac273192a155b817d44c7262180] ...
	I0914 01:08:54.900553  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0526496f4020539f724985b818887b6e8dcb5ac273192a155b817d44c7262180"
	I0914 01:08:54.936218  935082 logs.go:123] Gathering logs for kindnet [9ef75ae2efe118d574f7065bf12cf3ebefbd42e6119d914c3cce6ac62053dacc] ...
	I0914 01:08:54.936301  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ef75ae2efe118d574f7065bf12cf3ebefbd42e6119d914c3cce6ac62053dacc"
	I0914 01:08:54.979610  935082 logs.go:123] Gathering logs for CRI-O ...
	I0914 01:08:54.979639  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0914 01:08:55.044095  935082 logs.go:123] Gathering logs for kubelet ...
	I0914 01:08:55.044170  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0914 01:08:55.123540  935082 logs.go:123] Gathering logs for kube-apiserver [94c318733bd78e6c67352b721db2407e5f31e3e7605e6cb161e43e0bbd5a0e41] ...
	I0914 01:08:55.123579  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c318733bd78e6c67352b721db2407e5f31e3e7605e6cb161e43e0bbd5a0e41"
	I0914 01:08:55.169155  935082 logs.go:123] Gathering logs for kube-apiserver [53520ade2de5372955f21dfacc389a61de951119ceefafc2d0674bf97df42fe0] ...
	I0914 01:08:55.169187  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53520ade2de5372955f21dfacc389a61de951119ceefafc2d0674bf97df42fe0"
	I0914 01:08:55.213280  935082 logs.go:123] Gathering logs for etcd [3a0eab5c98560bfc340da7d7b1ae66ab0730c476fac8007cb9da6dd504f20635] ...
	I0914 01:08:55.213310  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a0eab5c98560bfc340da7d7b1ae66ab0730c476fac8007cb9da6dd504f20635"
	I0914 01:08:55.272589  935082 logs.go:123] Gathering logs for kube-scheduler [9d00a43740d0da8687c949721650d191793b3ea450bc37cada42bdc367bda929] ...
	I0914 01:08:55.272621  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d00a43740d0da8687c949721650d191793b3ea450bc37cada42bdc367bda929"
	I0914 01:08:55.308724  935082 logs.go:123] Gathering logs for etcd [05300c676cdbb56a371e7293896043af78e566d569b5ff4f7bf13765715b0d96] ...
	I0914 01:08:55.308759  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05300c676cdbb56a371e7293896043af78e566d569b5ff4f7bf13765715b0d96"
	I0914 01:08:55.358967  935082 logs.go:123] Gathering logs for container status ...
	I0914 01:08:55.358998  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:08:57.902645  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 01:08:57.902669  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:57.902679  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:57.902684  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:57.910750  935082 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0914 01:08:57.922125  935082 system_pods.go:59] 26 kube-system pods found
	I0914 01:08:57.922213  935082 system_pods.go:61] "coredns-7c65d6cfc9-ghkt8" [b2f4b69f-d357-4070-b350-4c0b724d8d16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:08:57.922247  935082 system_pods.go:61] "coredns-7c65d6cfc9-zrv9t" [bfa997e2-9cf0-4e8f-8f3e-eb24773c7288] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:08:57.922285  935082 system_pods.go:61] "etcd-ha-401927" [7edf7114-bfc8-498e-9d0a-555768a47856] Running
	I0914 01:08:57.922313  935082 system_pods.go:61] "etcd-ha-401927-m02" [4b864657-fa03-4dad-b674-9df356400fcc] Running
	I0914 01:08:57.922333  935082 system_pods.go:61] "etcd-ha-401927-m03" [f1fe26e1-f2c5-4ce3-aa09-d60fce4a3f9e] Running
	I0914 01:08:57.922378  935082 system_pods.go:61] "kindnet-2sh8d" [fec8d264-c2d4-4b2a-9a2e-1774b06b063e] Running
	I0914 01:08:57.922399  935082 system_pods.go:61] "kindnet-5kpgh" [4ed123cd-5c25-4428-b2e6-7eace0cf37ad] Running
	I0914 01:08:57.922417  935082 system_pods.go:61] "kindnet-b9pww" [92d62cf2-61e4-4a9f-b326-d1609a0c8d8f] Running
	I0914 01:08:57.922435  935082 system_pods.go:61] "kindnet-wx4k5" [a1131bfa-790b-4dc4-8283-311e88b5bdbc] Running
	I0914 01:08:57.922477  935082 system_pods.go:61] "kube-apiserver-ha-401927" [1833c9c8-5e1d-4567-86f9-3ef5f2251c9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:08:57.922501  935082 system_pods.go:61] "kube-apiserver-ha-401927-m02" [a7c319a5-0176-4be3-937f-c8f34a138496] Running
	I0914 01:08:57.922521  935082 system_pods.go:61] "kube-apiserver-ha-401927-m03" [7f601b16-91c2-455f-8291-c697b723db05] Running
	I0914 01:08:57.922543  935082 system_pods.go:61] "kube-controller-manager-ha-401927" [7e921f06-cbe6-4cb8-98c1-975114150047] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:08:57.922563  935082 system_pods.go:61] "kube-controller-manager-ha-401927-m02" [a79794ae-bd5f-4e30-9da9-8db6cf535c9c] Running
	I0914 01:08:57.922601  935082 system_pods.go:61] "kube-controller-manager-ha-401927-m03" [32e54d91-fd5c-4e86-98f2-1d6ca25e5f30] Running
	I0914 01:08:57.922619  935082 system_pods.go:61] "kube-proxy-bx82b" [5382aed9-d68e-45e8-a0db-87e597f4f06e] Running
	I0914 01:08:57.922640  935082 system_pods.go:61] "kube-proxy-dh9sg" [cf78d7a6-1715-4e41-8c73-d518cca00e8c] Running
	I0914 01:08:57.922659  935082 system_pods.go:61] "kube-proxy-mbxw6" [f179be08-4b68-4f9e-821f-879d8e24bb6a] Running
	I0914 01:08:57.922686  935082 system_pods.go:61] "kube-proxy-vb5lf" [cfa118d9-4171-49d6-9647-32f954ed1900] Running
	I0914 01:08:57.922714  935082 system_pods.go:61] "kube-scheduler-ha-401927" [ddbafcb3-a99a-41ff-aa7c-60023fa7de37] Running
	I0914 01:08:57.922732  935082 system_pods.go:61] "kube-scheduler-ha-401927-m02" [a19754b8-76cf-4a0e-9cc0-e7ade1a584a6] Running
	I0914 01:08:57.922749  935082 system_pods.go:61] "kube-scheduler-ha-401927-m03" [46f6d580-ab48-49fe-848f-31d703018258] Running
	I0914 01:08:57.922769  935082 system_pods.go:61] "kube-vip-ha-401927" [27223dac-80ea-4c49-9197-ecd48ff2c707] Running
	I0914 01:08:57.922794  935082 system_pods.go:61] "kube-vip-ha-401927-m02" [00641bc6-8e38-4c37-9a91-a77979faf9c4] Running
	I0914 01:08:57.922816  935082 system_pods.go:61] "kube-vip-ha-401927-m03" [8595237c-8f37-45b5-8d8e-8858c4a7e89e] Running
	I0914 01:08:57.922839  935082 system_pods.go:61] "storage-provisioner" [9d087fa0-0445-40f0-bd37-3d5b56bbe9d3] Running
	I0914 01:08:57.922861  935082 system_pods.go:74] duration metric: took 3.82240128s to wait for pod list to return data ...
	I0914 01:08:57.922890  935082 default_sa.go:34] waiting for default service account to be created ...
	I0914 01:08:57.923019  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0914 01:08:57.923044  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:57.923067  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:57.923087  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:57.930536  935082 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 01:08:57.930821  935082 default_sa.go:45] found service account: "default"
	I0914 01:08:57.930841  935082 default_sa.go:55] duration metric: took 7.929774ms for default service account to be created ...
	I0914 01:08:57.930851  935082 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 01:08:57.930919  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 01:08:57.930929  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:57.930937  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:57.930943  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:57.938006  935082 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0914 01:08:57.952879  935082 system_pods.go:86] 26 kube-system pods found
	I0914 01:08:57.952921  935082 system_pods.go:89] "coredns-7c65d6cfc9-ghkt8" [b2f4b69f-d357-4070-b350-4c0b724d8d16] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:08:57.952932  935082 system_pods.go:89] "coredns-7c65d6cfc9-zrv9t" [bfa997e2-9cf0-4e8f-8f3e-eb24773c7288] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0914 01:08:57.952939  935082 system_pods.go:89] "etcd-ha-401927" [7edf7114-bfc8-498e-9d0a-555768a47856] Running
	I0914 01:08:57.952946  935082 system_pods.go:89] "etcd-ha-401927-m02" [4b864657-fa03-4dad-b674-9df356400fcc] Running
	I0914 01:08:57.952951  935082 system_pods.go:89] "etcd-ha-401927-m03" [f1fe26e1-f2c5-4ce3-aa09-d60fce4a3f9e] Running
	I0914 01:08:57.952956  935082 system_pods.go:89] "kindnet-2sh8d" [fec8d264-c2d4-4b2a-9a2e-1774b06b063e] Running
	I0914 01:08:57.952960  935082 system_pods.go:89] "kindnet-5kpgh" [4ed123cd-5c25-4428-b2e6-7eace0cf37ad] Running
	I0914 01:08:57.952965  935082 system_pods.go:89] "kindnet-b9pww" [92d62cf2-61e4-4a9f-b326-d1609a0c8d8f] Running
	I0914 01:08:57.952970  935082 system_pods.go:89] "kindnet-wx4k5" [a1131bfa-790b-4dc4-8283-311e88b5bdbc] Running
	I0914 01:08:57.952977  935082 system_pods.go:89] "kube-apiserver-ha-401927" [1833c9c8-5e1d-4567-86f9-3ef5f2251c9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0914 01:08:57.952984  935082 system_pods.go:89] "kube-apiserver-ha-401927-m02" [a7c319a5-0176-4be3-937f-c8f34a138496] Running
	I0914 01:08:57.952989  935082 system_pods.go:89] "kube-apiserver-ha-401927-m03" [7f601b16-91c2-455f-8291-c697b723db05] Running
	I0914 01:08:57.952997  935082 system_pods.go:89] "kube-controller-manager-ha-401927" [7e921f06-cbe6-4cb8-98c1-975114150047] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0914 01:08:57.953002  935082 system_pods.go:89] "kube-controller-manager-ha-401927-m02" [a79794ae-bd5f-4e30-9da9-8db6cf535c9c] Running
	I0914 01:08:57.953008  935082 system_pods.go:89] "kube-controller-manager-ha-401927-m03" [32e54d91-fd5c-4e86-98f2-1d6ca25e5f30] Running
	I0914 01:08:57.953013  935082 system_pods.go:89] "kube-proxy-bx82b" [5382aed9-d68e-45e8-a0db-87e597f4f06e] Running
	I0914 01:08:57.953017  935082 system_pods.go:89] "kube-proxy-dh9sg" [cf78d7a6-1715-4e41-8c73-d518cca00e8c] Running
	I0914 01:08:57.953021  935082 system_pods.go:89] "kube-proxy-mbxw6" [f179be08-4b68-4f9e-821f-879d8e24bb6a] Running
	I0914 01:08:57.953025  935082 system_pods.go:89] "kube-proxy-vb5lf" [cfa118d9-4171-49d6-9647-32f954ed1900] Running
	I0914 01:08:57.953029  935082 system_pods.go:89] "kube-scheduler-ha-401927" [ddbafcb3-a99a-41ff-aa7c-60023fa7de37] Running
	I0914 01:08:57.953033  935082 system_pods.go:89] "kube-scheduler-ha-401927-m02" [a19754b8-76cf-4a0e-9cc0-e7ade1a584a6] Running
	I0914 01:08:57.953037  935082 system_pods.go:89] "kube-scheduler-ha-401927-m03" [46f6d580-ab48-49fe-848f-31d703018258] Running
	I0914 01:08:57.953041  935082 system_pods.go:89] "kube-vip-ha-401927" [27223dac-80ea-4c49-9197-ecd48ff2c707] Running
	I0914 01:08:57.953046  935082 system_pods.go:89] "kube-vip-ha-401927-m02" [00641bc6-8e38-4c37-9a91-a77979faf9c4] Running
	I0914 01:08:57.953050  935082 system_pods.go:89] "kube-vip-ha-401927-m03" [8595237c-8f37-45b5-8d8e-8858c4a7e89e] Running
	I0914 01:08:57.953054  935082 system_pods.go:89] "storage-provisioner" [9d087fa0-0445-40f0-bd37-3d5b56bbe9d3] Running
	I0914 01:08:57.953060  935082 system_pods.go:126] duration metric: took 22.197656ms to wait for k8s-apps to be running ...
	I0914 01:08:57.953067  935082 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:08:57.953124  935082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:08:57.971683  935082 system_svc.go:56] duration metric: took 18.604363ms WaitForService to wait for kubelet
	I0914 01:08:57.971714  935082 kubeadm.go:582] duration metric: took 1m8.630239785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:08:57.971735  935082 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:08:57.971811  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0914 01:08:57.971822  935082 round_trippers.go:469] Request Headers:
	I0914 01:08:57.971832  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:08:57.971838  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:08:57.984255  935082 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0914 01:08:57.985592  935082 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 01:08:57.985623  935082 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:57.985635  935082 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 01:08:57.985640  935082 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:57.985645  935082 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 01:08:57.985648  935082 node_conditions.go:123] node cpu capacity is 2
	I0914 01:08:57.985653  935082 node_conditions.go:105] duration metric: took 13.914134ms to run NodePressure ...
	I0914 01:08:57.985670  935082 start.go:241] waiting for startup goroutines ...
	I0914 01:08:57.985697  935082 start.go:255] writing updated cluster config ...
	I0914 01:08:57.988965  935082 out.go:201] 
	I0914 01:08:57.992021  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:08:57.992144  935082 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/config.json ...
	I0914 01:08:57.995106  935082 out.go:177] * Starting "ha-401927-m04" worker node in "ha-401927" cluster
	I0914 01:08:57.998550  935082 cache.go:121] Beginning downloading kic base image for docker with crio
	I0914 01:08:58.003964  935082 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0914 01:08:58.006856  935082 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 01:08:58.006900  935082 cache.go:56] Caching tarball of preloaded images
	I0914 01:08:58.006942  935082 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 01:08:58.007014  935082 preload.go:172] Found /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0914 01:08:58.007026  935082 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0914 01:08:58.007173  935082 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/config.json ...
	W0914 01:08:58.026879  935082 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 is of wrong architecture
	I0914 01:08:58.026902  935082 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 01:08:58.026985  935082 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 01:08:58.027015  935082 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 01:08:58.027033  935082 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 01:08:58.027047  935082 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 01:08:58.027060  935082 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0914 01:08:58.028353  935082 image.go:273] response: 
	I0914 01:08:58.151808  935082 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0914 01:08:58.151851  935082 cache.go:194] Successfully downloaded all kic artifacts
	I0914 01:08:58.151886  935082 start.go:360] acquireMachinesLock for ha-401927-m04: {Name:mkba8e7c51b5df37a36dc2300982c160abe6dc52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:08:58.151962  935082 start.go:364] duration metric: took 53.816µs to acquireMachinesLock for "ha-401927-m04"
	I0914 01:08:58.151987  935082 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:08:58.151999  935082 fix.go:54] fixHost starting: m04
	I0914 01:08:58.152277  935082 cli_runner.go:164] Run: docker container inspect ha-401927-m04 --format={{.State.Status}}
	I0914 01:08:58.169184  935082 fix.go:112] recreateIfNeeded on ha-401927-m04: state=Stopped err=<nil>
	W0914 01:08:58.169212  935082 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:08:58.171770  935082 out.go:177] * Restarting existing docker container for "ha-401927-m04" ...
	I0914 01:08:58.174579  935082 cli_runner.go:164] Run: docker start ha-401927-m04
	I0914 01:08:58.496266  935082 cli_runner.go:164] Run: docker container inspect ha-401927-m04 --format={{.State.Status}}
	I0914 01:08:58.515278  935082 kic.go:430] container "ha-401927-m04" state is running.
	I0914 01:08:58.515826  935082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927-m04
	I0914 01:08:58.540107  935082 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/config.json ...
	I0914 01:08:58.541179  935082 machine.go:93] provisionDockerMachine start ...
	I0914 01:08:58.541469  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m04
	I0914 01:08:58.562948  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:08:58.563485  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33634 <nil> <nil>}
	I0914 01:08:58.563503  935082 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:08:58.564241  935082 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0914 01:09:01.693507  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401927-m04
	
	I0914 01:09:01.693608  935082 ubuntu.go:169] provisioning hostname "ha-401927-m04"
	I0914 01:09:01.693699  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m04
	I0914 01:09:01.714615  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:09:01.714905  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33634 <nil> <nil>}
	I0914 01:09:01.714923  935082 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-401927-m04 && echo "ha-401927-m04" | sudo tee /etc/hostname
	I0914 01:09:01.872392  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-401927-m04
	
	I0914 01:09:01.872523  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m04
	I0914 01:09:01.891561  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:09:01.891833  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33634 <nil> <nil>}
	I0914 01:09:01.891857  935082 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-401927-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-401927-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-401927-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:09:02.018081  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:09:02.018111  935082 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-868698/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-868698/.minikube}
	I0914 01:09:02.018131  935082 ubuntu.go:177] setting up certificates
	I0914 01:09:02.018143  935082 provision.go:84] configureAuth start
	I0914 01:09:02.018211  935082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927-m04
	I0914 01:09:02.039349  935082 provision.go:143] copyHostCerts
	I0914 01:09:02.039801  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem
	I0914 01:09:02.039871  935082 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem, removing ...
	I0914 01:09:02.039883  935082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem
	I0914 01:09:02.039968  935082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/ca.pem (1078 bytes)
	I0914 01:09:02.040065  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem
	I0914 01:09:02.040092  935082 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem, removing ...
	I0914 01:09:02.040097  935082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem
	I0914 01:09:02.040132  935082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/cert.pem (1123 bytes)
	I0914 01:09:02.040180  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem
	I0914 01:09:02.040201  935082 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem, removing ...
	I0914 01:09:02.040209  935082 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem
	I0914 01:09:02.040234  935082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-868698/.minikube/key.pem (1679 bytes)
	I0914 01:09:02.040288  935082 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem org=jenkins.ha-401927-m04 san=[127.0.0.1 192.168.49.5 ha-401927-m04 localhost minikube]
	I0914 01:09:02.406878  935082 provision.go:177] copyRemoteCerts
	I0914 01:09:02.406979  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:09:02.407048  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m04
	I0914 01:09:02.425101  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33634 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m04/id_rsa Username:docker}
	I0914 01:09:02.515858  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0914 01:09:02.515925  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 01:09:02.544370  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0914 01:09:02.544437  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:09:02.570563  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0914 01:09:02.570624  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:09:02.597762  935082 provision.go:87] duration metric: took 579.603096ms to configureAuth
	I0914 01:09:02.597790  935082 ubuntu.go:193] setting minikube options for container-runtime
	I0914 01:09:02.598029  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:09:02.598140  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m04
	I0914 01:09:02.619798  935082 main.go:141] libmachine: Using SSH client type: native
	I0914 01:09:02.620041  935082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33634 <nil> <nil>}
	I0914 01:09:02.620059  935082 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0914 01:09:02.915120  935082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0914 01:09:02.915146  935082 machine.go:96] duration metric: took 4.373951307s to provisionDockerMachine
	I0914 01:09:02.915158  935082 start.go:293] postStartSetup for "ha-401927-m04" (driver="docker")
	I0914 01:09:02.915169  935082 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:09:02.915243  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:09:02.915322  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m04
	I0914 01:09:02.936413  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33634 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m04/id_rsa Username:docker}
	I0914 01:09:03.027386  935082 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:09:03.031712  935082 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 01:09:03.031751  935082 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 01:09:03.031762  935082 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 01:09:03.031769  935082 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 01:09:03.031781  935082 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/addons for local assets ...
	I0914 01:09:03.031854  935082 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-868698/.minikube/files for local assets ...
	I0914 01:09:03.031935  935082 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem -> 8740792.pem in /etc/ssl/certs
	I0914 01:09:03.031947  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem -> /etc/ssl/certs/8740792.pem
	I0914 01:09:03.032053  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:09:03.044678  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem --> /etc/ssl/certs/8740792.pem (1708 bytes)
	I0914 01:09:03.075604  935082 start.go:296] duration metric: took 160.428964ms for postStartSetup
	I0914 01:09:03.075748  935082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 01:09:03.075822  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m04
	I0914 01:09:03.098081  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33634 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m04/id_rsa Username:docker}
	I0914 01:09:03.185323  935082 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 01:09:03.190030  935082 fix.go:56] duration metric: took 5.038024826s for fixHost
	I0914 01:09:03.190056  935082 start.go:83] releasing machines lock for "ha-401927-m04", held for 5.038082022s
	I0914 01:09:03.190126  935082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927-m04
	I0914 01:09:03.209524  935082 out.go:177] * Found network options:
	I0914 01:09:03.212114  935082 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0914 01:09:03.214599  935082 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 01:09:03.214628  935082 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 01:09:03.214652  935082 proxy.go:119] fail to check proxy env: Error ip not in block
	W0914 01:09:03.214663  935082 proxy.go:119] fail to check proxy env: Error ip not in block
	I0914 01:09:03.214732  935082 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0914 01:09:03.214782  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m04
	I0914 01:09:03.215063  935082 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:09:03.215125  935082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m04
	I0914 01:09:03.239257  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33634 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m04/id_rsa Username:docker}
	I0914 01:09:03.247255  935082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33634 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m04/id_rsa Username:docker}
	I0914 01:09:03.492728  935082 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 01:09:03.497365  935082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:09:03.507564  935082 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0914 01:09:03.507638  935082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:09:03.518970  935082 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 01:09:03.519003  935082 start.go:495] detecting cgroup driver to use...
	I0914 01:09:03.519035  935082 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 01:09:03.519083  935082 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0914 01:09:03.533614  935082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0914 01:09:03.546399  935082 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:09:03.546460  935082 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:09:03.560098  935082 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:09:03.574500  935082 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:09:03.660488  935082 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:09:03.755646  935082 docker.go:233] disabling docker service ...
	I0914 01:09:03.755741  935082 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:09:03.769211  935082 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:09:03.781511  935082 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:09:03.884944  935082 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:09:03.991067  935082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:09:04.006229  935082 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:09:04.025787  935082 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0914 01:09:04.025923  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:09:04.036163  935082 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0914 01:09:04.036270  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:09:04.047275  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:09:04.058482  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:09:04.068842  935082 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:09:04.085907  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:09:04.107587  935082 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:09:04.121515  935082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0914 01:09:04.132546  935082 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:09:04.141831  935082 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:09:04.150967  935082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:09:04.233595  935082 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0914 01:09:04.348688  935082 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0914 01:09:04.348768  935082 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0914 01:09:04.352558  935082 start.go:563] Will wait 60s for crictl version
	I0914 01:09:04.352629  935082 ssh_runner.go:195] Run: which crictl
	I0914 01:09:04.356125  935082 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:09:04.395911  935082 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0914 01:09:04.396064  935082 ssh_runner.go:195] Run: crio --version
	I0914 01:09:04.436769  935082 ssh_runner.go:195] Run: crio --version
	I0914 01:09:04.484677  935082 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0914 01:09:04.487372  935082 out.go:177]   - env NO_PROXY=192.168.49.2
	I0914 01:09:04.489934  935082 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0914 01:09:04.492652  935082 cli_runner.go:164] Run: docker network inspect ha-401927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 01:09:04.511178  935082 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 01:09:04.515070  935082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:09:04.527759  935082 mustload.go:65] Loading cluster: ha-401927
	I0914 01:09:04.528014  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:09:04.528268  935082 cli_runner.go:164] Run: docker container inspect ha-401927 --format={{.State.Status}}
	I0914 01:09:04.549678  935082 host.go:66] Checking if "ha-401927" exists ...
	I0914 01:09:04.549966  935082 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927 for IP: 192.168.49.5
	I0914 01:09:04.549981  935082 certs.go:194] generating shared ca certs ...
	I0914 01:09:04.549997  935082 certs.go:226] acquiring lock for ca certs: {Name:mk51aad7f25871620dee3805dbb159a74d927d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:09:04.550113  935082 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key
	I0914 01:09:04.550158  935082 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key
	I0914 01:09:04.550172  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0914 01:09:04.550185  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0914 01:09:04.550202  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0914 01:09:04.550216  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0914 01:09:04.550271  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079.pem (1338 bytes)
	W0914 01:09:04.550300  935082 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079_empty.pem, impossibly tiny 0 bytes
	I0914 01:09:04.550309  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 01:09:04.550345  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:09:04.550372  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:09:04.550397  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/key.pem (1679 bytes)
	I0914 01:09:04.550442  935082 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem (1708 bytes)
	I0914 01:09:04.550471  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem -> /usr/share/ca-certificates/8740792.pem
	I0914 01:09:04.550492  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:09:04.550503  935082 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079.pem -> /usr/share/ca-certificates/874079.pem
	I0914 01:09:04.550524  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:09:04.576206  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 01:09:04.602576  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:09:04.630527  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 01:09:04.658821  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/ssl/certs/8740792.pem --> /usr/share/ca-certificates/8740792.pem (1708 bytes)
	I0914 01:09:04.684153  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:09:04.711764  935082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-868698/.minikube/certs/874079.pem --> /usr/share/ca-certificates/874079.pem (1338 bytes)
	I0914 01:09:04.742880  935082 ssh_runner.go:195] Run: openssl version
	I0914 01:09:04.750583  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/874079.pem && ln -fs /usr/share/ca-certificates/874079.pem /etc/ssl/certs/874079.pem"
	I0914 01:09:04.761453  935082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/874079.pem
	I0914 01:09:04.765813  935082 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 00:55 /usr/share/ca-certificates/874079.pem
	I0914 01:09:04.765882  935082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/874079.pem
	I0914 01:09:04.773075  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/874079.pem /etc/ssl/certs/51391683.0"
	I0914 01:09:04.782623  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8740792.pem && ln -fs /usr/share/ca-certificates/8740792.pem /etc/ssl/certs/8740792.pem"
	I0914 01:09:04.793896  935082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8740792.pem
	I0914 01:09:04.797881  935082 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 00:55 /usr/share/ca-certificates/8740792.pem
	I0914 01:09:04.797947  935082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8740792.pem
	I0914 01:09:04.805081  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8740792.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:09:04.816735  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:09:04.828534  935082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:09:04.832126  935082 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 00:35 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:09:04.832190  935082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:09:04.845393  935082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:09:04.855517  935082 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:09:04.859901  935082 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 01:09:04.859952  935082 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I0914 01:09:04.860039  935082 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-401927-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-401927 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:09:04.860117  935082 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:09:04.869620  935082 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:09:04.869701  935082 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0914 01:09:04.878989  935082 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0914 01:09:04.898813  935082 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:09:04.919304  935082 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0914 01:09:04.923464  935082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:09:04.935208  935082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:09:05.031553  935082 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:09:05.043373  935082 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0914 01:09:05.043878  935082 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:09:05.046974  935082 out.go:177] * Verifying Kubernetes components...
	I0914 01:09:05.049721  935082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:09:05.149706  935082 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:09:05.162873  935082 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 01:09:05.163162  935082 kapi.go:59] client config for ha-401927: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/client.crt", KeyFile:"/home/jenkins/minikube-integration/19640-868698/.minikube/profiles/ha-401927/client.key", CAFile:"/home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1e6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0914 01:09:05.163239  935082 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0914 01:09:05.163503  935082 node_ready.go:35] waiting up to 6m0s for node "ha-401927-m04" to be "Ready" ...
	I0914 01:09:05.163583  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:05.163591  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:05.163600  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:05.163613  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:05.166726  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:05.664657  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:05.664676  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:05.664686  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:05.664692  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:05.668103  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:06.164146  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:06.164217  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:06.164240  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:06.164259  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:06.167034  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:06.664355  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:06.664376  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:06.664386  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:06.664391  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:06.668765  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:09:07.164746  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:07.164767  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:07.164777  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:07.164784  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:07.183946  935082 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0914 01:09:07.185084  935082 node_ready.go:53] node "ha-401927-m04" has status "Ready":"Unknown"
	I0914 01:09:07.663716  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:07.663737  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:07.663746  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:07.663752  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:07.668229  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:09:08.163696  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:08.163769  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:08.163791  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:08.163811  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:08.169033  935082 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 01:09:08.664418  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:08.664437  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:08.664447  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:08.664453  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:08.667156  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:09.164646  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:09.164666  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:09.164675  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:09.164680  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:09.170044  935082 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 01:09:09.664635  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:09.664658  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:09.664667  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:09.664671  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:09.667329  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:09.667996  935082 node_ready.go:53] node "ha-401927-m04" has status "Ready":"Unknown"
	I0914 01:09:10.163768  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:10.163800  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:10.163810  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:10.163816  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:10.168847  935082 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 01:09:10.663952  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:10.663980  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:10.663993  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:10.663997  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:10.667581  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:11.163940  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:11.163961  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:11.163969  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:11.163973  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:11.166833  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:11.663757  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:11.663782  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:11.663790  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:11.663794  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:11.666814  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:12.163858  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:12.163884  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:12.163895  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:12.163900  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:12.172798  935082 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0914 01:09:12.173604  935082 node_ready.go:53] node "ha-401927-m04" has status "Ready":"Unknown"
	I0914 01:09:12.664354  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:12.664375  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:12.664385  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:12.664390  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:12.667459  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:12.668233  935082 node_ready.go:49] node "ha-401927-m04" has status "Ready":"True"
	I0914 01:09:12.668258  935082 node_ready.go:38] duration metric: took 7.504739053s for node "ha-401927-m04" to be "Ready" ...
	I0914 01:09:12.668269  935082 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:09:12.668341  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0914 01:09:12.668354  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:12.668363  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:12.668370  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:12.673622  935082 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 01:09:12.684375  935082 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:12.684590  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:12.684618  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:12.684634  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:12.684639  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:12.687793  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:12.688498  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:12.688518  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:12.688527  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:12.688531  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:12.692397  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:13.185307  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:13.185333  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:13.185341  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:13.185347  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:13.188159  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:13.189116  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:13.189140  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:13.189151  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:13.189156  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:13.191610  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:13.684645  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:13.684669  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:13.684677  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:13.684684  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:13.687556  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:13.688717  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:13.688735  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:13.688745  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:13.688750  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:13.691661  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:14.184851  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:14.184876  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:14.184886  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:14.184891  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:14.187834  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:14.188598  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:14.188616  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:14.188625  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:14.188631  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:14.191390  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:14.684660  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:14.684682  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:14.684691  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:14.684695  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:14.687615  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:14.688440  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:14.688461  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:14.688469  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:14.688473  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:14.692215  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:14.693331  935082 pod_ready.go:103] pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace has status "Ready":"False"
	I0914 01:09:15.184731  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:15.184754  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:15.184764  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:15.184769  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:15.187636  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:15.188603  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:15.188628  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:15.188642  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:15.188646  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:15.191215  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:15.684599  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:15.684624  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:15.684634  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:15.684638  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:15.687670  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:15.688612  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:15.688628  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:15.688638  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:15.688643  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:15.691831  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:16.184901  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:16.184922  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:16.184931  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:16.184935  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:16.187973  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:16.188779  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:16.188800  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:16.188810  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:16.188816  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:16.191875  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:16.684949  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:16.684973  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:16.684983  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:16.684987  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:16.687802  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:16.688684  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:16.688704  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:16.688713  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:16.688717  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:16.691906  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:17.185226  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:17.185246  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:17.185281  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:17.185286  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:17.188638  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:17.189854  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:17.189873  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:17.189885  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:17.189889  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:17.194681  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:09:17.195622  935082 pod_ready.go:103] pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace has status "Ready":"False"
	I0914 01:09:17.684941  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:17.684968  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:17.684978  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:17.684982  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:17.688489  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:17.689779  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:17.689798  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:17.689808  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:17.689812  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:17.692677  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:18.184786  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:18.184809  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:18.184818  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:18.184823  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:18.187818  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:18.188775  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:18.188797  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:18.188807  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:18.188812  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:18.191285  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:18.684589  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:18.684614  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:18.684624  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:18.684630  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:18.687550  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:18.688315  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:18.688331  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:18.688342  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:18.688363  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:18.692089  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:19.184825  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:19.184848  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:19.184858  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:19.184862  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:19.187740  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:19.188513  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:19.188533  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:19.188543  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:19.188547  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:19.191231  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:19.685311  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:19.685335  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:19.685346  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:19.685351  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:19.688563  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:19.689278  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:19.689297  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:19.689306  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:19.689311  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:19.692671  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:19.693513  935082 pod_ready.go:103] pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace has status "Ready":"False"
	I0914 01:09:20.185305  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:20.185330  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:20.185339  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:20.185343  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:20.188613  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:20.189679  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:20.189746  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:20.189769  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:20.189789  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:20.192149  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:20.684713  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:20.684741  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:20.684756  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:20.684761  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:20.687575  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:20.688594  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:20.688612  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:20.688621  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:20.688626  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:20.691881  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:21.184694  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:21.184719  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:21.184728  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:21.184732  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:21.187595  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:21.188609  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:21.188628  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:21.188638  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:21.188643  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:21.191204  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:21.684734  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:21.684758  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:21.684766  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:21.684771  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:21.687986  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:21.688799  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:21.688820  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:21.688830  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:21.688836  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:21.692038  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:22.185361  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:22.185383  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:22.185393  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:22.185397  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:22.188196  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:22.188979  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:22.188997  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:22.189007  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:22.189012  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:22.191639  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:22.192285  935082 pod_ready.go:103] pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace has status "Ready":"False"
	I0914 01:09:22.684668  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:22.684691  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:22.684701  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:22.684705  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:22.687687  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:22.688981  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:22.689001  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:22.689011  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:22.689023  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:22.697564  935082 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0914 01:09:23.185408  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:23.185432  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:23.185442  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:23.185446  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:23.188333  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:23.189148  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:23.189167  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:23.189176  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:23.189181  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:23.191714  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:23.684851  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:23.684875  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:23.684885  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:23.684889  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:23.687788  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:23.688664  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:23.688685  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:23.688694  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:23.688699  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:23.691632  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:24.184659  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:24.184684  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:24.184693  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:24.184698  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:24.187809  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:24.188589  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:24.188607  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:24.188616  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:24.188623  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:24.191197  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:24.685396  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:24.685419  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:24.685428  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:24.685434  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:24.688245  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:24.689010  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:24.689031  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:24.689041  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:24.689044  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:24.691994  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:24.692658  935082 pod_ready.go:103] pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace has status "Ready":"False"
	I0914 01:09:25.184750  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:25.184773  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:25.184784  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:25.184789  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:25.187699  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:25.188509  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:25.188536  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:25.188546  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:25.188550  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:25.191173  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:25.685606  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:25.685626  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:25.685636  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:25.685641  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:25.688735  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:25.689710  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:25.689732  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:25.689742  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:25.689749  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:25.692526  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:26.184581  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:26.184603  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:26.184613  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:26.184618  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:26.187475  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:26.188408  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:26.188429  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:26.188442  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:26.188448  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:26.190920  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:26.685036  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:26.685058  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:26.685067  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:26.685072  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:26.687953  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:26.688819  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:26.688840  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:26.688851  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:26.688857  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:26.691359  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:27.184657  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghkt8
	I0914 01:09:27.184685  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.184697  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.184702  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.194965  935082 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0914 01:09:27.196177  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:27.196193  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.196202  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.196208  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.199228  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:27.199804  935082 pod_ready.go:98] node "ha-401927" hosting pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:27.199822  935082 pod_ready.go:82] duration metric: took 14.515407793s for pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:27.199833  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927" hosting pod "coredns-7c65d6cfc9-ghkt8" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:27.199841  935082 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zrv9t" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:27.199910  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zrv9t
	I0914 01:09:27.199915  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.199923  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.199927  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.205359  935082 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0914 01:09:27.206637  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:27.206654  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.206663  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.206670  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.210674  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:27.211740  935082 pod_ready.go:98] node "ha-401927" hosting pod "coredns-7c65d6cfc9-zrv9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:27.211762  935082 pod_ready.go:82] duration metric: took 11.91382ms for pod "coredns-7c65d6cfc9-zrv9t" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:27.211772  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927" hosting pod "coredns-7c65d6cfc9-zrv9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:27.211780  935082 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:27.211848  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401927
	I0914 01:09:27.211853  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.211861  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.211866  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.230336  935082 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0914 01:09:27.232257  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:27.232327  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.232350  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.232372  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.241667  935082 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0914 01:09:27.242903  935082 pod_ready.go:98] node "ha-401927" hosting pod "etcd-ha-401927" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:27.242983  935082 pod_ready.go:82] duration metric: took 31.1955ms for pod "etcd-ha-401927" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:27.243009  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927" hosting pod "etcd-ha-401927" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:27.243040  935082 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:27.243145  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401927-m02
	I0914 01:09:27.243168  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.243190  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.243213  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.247344  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:09:27.248895  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:09:27.248960  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.248983  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.249006  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.265156  935082 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0914 01:09:27.266220  935082 pod_ready.go:93] pod "etcd-ha-401927-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 01:09:27.266284  935082 pod_ready.go:82] duration metric: took 23.2173ms for pod "etcd-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:27.266309  935082 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:27.266403  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-401927-m03
	I0914 01:09:27.266428  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.266450  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.266471  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.281959  935082 round_trippers.go:574] Response Status: 404 Not Found in 15 milliseconds
	I0914 01:09:27.282233  935082 pod_ready.go:98] error getting pod "etcd-ha-401927-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-401927-m03" not found
	I0914 01:09:27.282277  935082 pod_ready.go:82] duration metric: took 15.948105ms for pod "etcd-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:27.282303  935082 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "etcd-ha-401927-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-401927-m03" not found
	I0914 01:09:27.282350  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:27.282458  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927
	I0914 01:09:27.282483  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.282507  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.282527  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.301179  935082 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0914 01:09:27.385340  935082 request.go:632] Waited for 83.206561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:27.385453  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:27.385474  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.385484  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.385489  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.388258  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:27.388911  935082 pod_ready.go:98] node "ha-401927" hosting pod "kube-apiserver-ha-401927" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:27.388934  935082 pod_ready.go:82] duration metric: took 106.558003ms for pod "kube-apiserver-ha-401927" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:27.388945  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927" hosting pod "kube-apiserver-ha-401927" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:27.388953  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:27.585378  935082 request.go:632] Waited for 196.353972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927-m02
	I0914 01:09:27.585491  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927-m02
	I0914 01:09:27.585523  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.585552  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.585571  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.588633  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:27.784709  935082 request.go:632] Waited for 195.321224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:09:27.784789  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:09:27.784796  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.784805  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.784810  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.787845  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:27.788543  935082 pod_ready.go:93] pod "kube-apiserver-ha-401927-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 01:09:27.788563  935082 pod_ready.go:82] duration metric: took 399.598324ms for pod "kube-apiserver-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:27.788583  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:27.984831  935082 request.go:632] Waited for 196.173693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927-m03
	I0914 01:09:27.984908  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927-m03
	I0914 01:09:27.984915  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:27.984923  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:27.984934  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:27.987724  935082 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0914 01:09:27.987839  935082 pod_ready.go:98] error getting pod "kube-apiserver-ha-401927-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-401927-m03" not found
	I0914 01:09:27.987873  935082 pod_ready.go:82] duration metric: took 199.261333ms for pod "kube-apiserver-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:27.987883  935082 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-ha-401927-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-401927-m03" not found
	I0914 01:09:27.987889  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:28.185596  935082 request.go:632] Waited for 197.624276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927
	I0914 01:09:28.185678  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927
	I0914 01:09:28.185691  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:28.185704  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:28.185713  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:28.188551  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:28.385578  935082 request.go:632] Waited for 196.197856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:28.385660  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:28.385686  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:28.385703  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:28.385722  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:28.388574  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:28.389284  935082 pod_ready.go:98] node "ha-401927" hosting pod "kube-controller-manager-ha-401927" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:28.389308  935082 pod_ready.go:82] duration metric: took 401.41164ms for pod "kube-controller-manager-ha-401927" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:28.389320  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927" hosting pod "kube-controller-manager-ha-401927" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:28.389328  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:28.585614  935082 request.go:632] Waited for 196.207678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927-m02
	I0914 01:09:28.585690  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927-m02
	I0914 01:09:28.585699  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:28.585708  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:28.585725  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:28.588904  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:28.785032  935082 request.go:632] Waited for 195.339366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:09:28.785095  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:09:28.785106  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:28.785115  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:28.785127  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:28.788676  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:28.789309  935082 pod_ready.go:93] pod "kube-controller-manager-ha-401927-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 01:09:28.789357  935082 pod_ready.go:82] duration metric: took 400.015357ms for pod "kube-controller-manager-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:28.789376  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:28.985098  935082 request.go:632] Waited for 195.645279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927-m03
	I0914 01:09:28.985192  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-401927-m03
	I0914 01:09:28.985200  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:28.985210  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:28.985221  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:28.988143  935082 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0914 01:09:28.988297  935082 pod_ready.go:98] error getting pod "kube-controller-manager-ha-401927-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-401927-m03" not found
	I0914 01:09:28.988316  935082 pod_ready.go:82] duration metric: took 198.931576ms for pod "kube-controller-manager-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:28.988327  935082 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-401927-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-401927-m03" not found
	I0914 01:09:28.988340  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bx82b" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:29.184697  935082 request.go:632] Waited for 196.285083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bx82b
	I0914 01:09:29.184759  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bx82b
	I0914 01:09:29.184769  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:29.184778  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:29.184785  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:29.188010  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:29.385330  935082 request.go:632] Waited for 196.38186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:29.385451  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m04
	I0914 01:09:29.385489  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:29.385516  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:29.385537  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:29.389676  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:09:29.390930  935082 pod_ready.go:93] pod "kube-proxy-bx82b" in "kube-system" namespace has status "Ready":"True"
	I0914 01:09:29.390954  935082 pod_ready.go:82] duration metric: took 402.604079ms for pod "kube-proxy-bx82b" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:29.390966  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dh9sg" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:29.584918  935082 request.go:632] Waited for 193.883137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dh9sg
	I0914 01:09:29.585011  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dh9sg
	I0914 01:09:29.585024  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:29.585034  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:29.585039  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:29.587864  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:29.784727  935082 request.go:632] Waited for 196.232694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:29.784791  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:29.784802  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:29.784811  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:29.784841  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:29.788992  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:09:29.789692  935082 pod_ready.go:98] node "ha-401927" hosting pod "kube-proxy-dh9sg" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:29.789720  935082 pod_ready.go:82] duration metric: took 398.746217ms for pod "kube-proxy-dh9sg" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:29.789748  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927" hosting pod "kube-proxy-dh9sg" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:29.789763  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mbxw6" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:29.985187  935082 request.go:632] Waited for 195.352173ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbxw6
	I0914 01:09:29.985292  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mbxw6
	I0914 01:09:29.985306  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:29.985315  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:29.985319  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:29.987924  935082 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0914 01:09:29.988103  935082 pod_ready.go:98] error getting pod "kube-proxy-mbxw6" in "kube-system" namespace (skipping!): pods "kube-proxy-mbxw6" not found
	I0914 01:09:29.988139  935082 pod_ready.go:82] duration metric: took 198.365148ms for pod "kube-proxy-mbxw6" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:29.988164  935082 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-proxy-mbxw6" in "kube-system" namespace (skipping!): pods "kube-proxy-mbxw6" not found
	I0914 01:09:29.988186  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vb5lf" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:30.185532  935082 request.go:632] Waited for 197.249598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vb5lf
	I0914 01:09:30.185652  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vb5lf
	I0914 01:09:30.185666  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:30.185681  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:30.185686  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:30.190270  935082 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0914 01:09:30.385234  935082 request.go:632] Waited for 194.313732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:09:30.385333  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:09:30.385347  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:30.385356  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:30.385361  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:30.388232  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:30.389185  935082 pod_ready.go:93] pod "kube-proxy-vb5lf" in "kube-system" namespace has status "Ready":"True"
	I0914 01:09:30.389242  935082 pod_ready.go:82] duration metric: took 401.025771ms for pod "kube-proxy-vb5lf" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:30.389287  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-401927" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:30.585059  935082 request.go:632] Waited for 195.689643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927
	I0914 01:09:30.585129  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927
	I0914 01:09:30.585135  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:30.585144  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:30.585148  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:30.588440  935082 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0914 01:09:30.785611  935082 request.go:632] Waited for 196.241581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:30.785704  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927
	I0914 01:09:30.785765  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:30.785775  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:30.785780  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:30.788456  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:30.789098  935082 pod_ready.go:98] node "ha-401927" hosting pod "kube-scheduler-ha-401927" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:30.789120  935082 pod_ready.go:82] duration metric: took 399.824789ms for pod "kube-scheduler-ha-401927" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:30.789143  935082 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-401927" hosting pod "kube-scheduler-ha-401927" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-401927" has status "Ready":"Unknown"
	I0914 01:09:30.789157  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:30.985566  935082 request.go:632] Waited for 196.342919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927-m02
	I0914 01:09:30.985635  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927-m02
	I0914 01:09:30.985644  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:30.985662  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:30.985675  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:30.988316  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:31.185293  935082 request.go:632] Waited for 196.33163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:09:31.185360  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-401927-m02
	I0914 01:09:31.185369  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:31.185378  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:31.185384  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:31.187973  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:31.188661  935082 pod_ready.go:93] pod "kube-scheduler-ha-401927-m02" in "kube-system" namespace has status "Ready":"True"
	I0914 01:09:31.188681  935082 pod_ready.go:82] duration metric: took 399.516087ms for pod "kube-scheduler-ha-401927-m02" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:31.188718  935082 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	I0914 01:09:31.385054  935082 request.go:632] Waited for 196.263668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927-m03
	I0914 01:09:31.385169  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-401927-m03
	I0914 01:09:31.385182  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:31.385202  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:31.385214  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:31.387913  935082 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0914 01:09:31.388101  935082 pod_ready.go:98] error getting pod "kube-scheduler-ha-401927-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-401927-m03" not found
	I0914 01:09:31.388123  935082 pod_ready.go:82] duration metric: took 199.391504ms for pod "kube-scheduler-ha-401927-m03" in "kube-system" namespace to be "Ready" ...
	E0914 01:09:31.388140  935082 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-401927-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-401927-m03" not found
	I0914 01:09:31.388153  935082 pod_ready.go:39] duration metric: took 18.719873478s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:09:31.388172  935082 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 01:09:31.388230  935082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:09:31.399953  935082 system_svc.go:56] duration metric: took 11.770767ms WaitForService to wait for kubelet
	I0914 01:09:31.399985  935082 kubeadm.go:582] duration metric: took 26.356566687s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:09:31.400010  935082 node_conditions.go:102] verifying NodePressure condition ...
	I0914 01:09:31.585282  935082 request.go:632] Waited for 185.186349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0914 01:09:31.585345  935082 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0914 01:09:31.585351  935082 round_trippers.go:469] Request Headers:
	I0914 01:09:31.585360  935082 round_trippers.go:473]     Accept: application/json, */*
	I0914 01:09:31.585366  935082 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0914 01:09:31.588309  935082 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0914 01:09:31.589917  935082 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 01:09:31.589948  935082 node_conditions.go:123] node cpu capacity is 2
	I0914 01:09:31.589960  935082 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 01:09:31.589967  935082 node_conditions.go:123] node cpu capacity is 2
	I0914 01:09:31.589972  935082 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 01:09:31.589977  935082 node_conditions.go:123] node cpu capacity is 2
	I0914 01:09:31.589983  935082 node_conditions.go:105] duration metric: took 189.966429ms to run NodePressure ...
	I0914 01:09:31.590000  935082 start.go:241] waiting for startup goroutines ...
	I0914 01:09:31.590027  935082 start.go:255] writing updated cluster config ...
	I0914 01:09:31.590360  935082 ssh_runner.go:195] Run: rm -f paused
	I0914 01:09:31.649308  935082 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 01:09:31.652451  935082 out.go:177] * Done! kubectl is now configured to use "ha-401927" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 14 01:09:02 ha-401927 crio[642]: time="2024-09-14 01:09:02.702790967Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b352e8de1f2bd815a7b3d4cbde2dfc054f425f8ad1d057913eb476aa5723dae4/merged/etc/group: no such file or directory"
	Sep 14 01:09:02 ha-401927 crio[642]: time="2024-09-14 01:09:02.750849234Z" level=info msg="Created container 71ef0ec91d294293b4842f8fe4573449801b77baf2282daa4cf102232566a554: kube-system/storage-provisioner/storage-provisioner" id=8cdda35c-b508-4baa-af61-b2803f2c3fa1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 01:09:02 ha-401927 crio[642]: time="2024-09-14 01:09:02.751566033Z" level=info msg="Starting container: 71ef0ec91d294293b4842f8fe4573449801b77baf2282daa4cf102232566a554" id=c3730dbf-3520-47fb-b50e-845d34648477 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 01:09:02 ha-401927 crio[642]: time="2024-09-14 01:09:02.757872502Z" level=info msg="Started container" PID=1834 containerID=71ef0ec91d294293b4842f8fe4573449801b77baf2282daa4cf102232566a554 description=kube-system/storage-provisioner/storage-provisioner id=c3730dbf-3520-47fb-b50e-845d34648477 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef9dbedccb262158ae3bd5e1e0be59fe8a3efd79d369ccb2d43e1121f4e35c04
	Sep 14 01:09:05 ha-401927 crio[642]: time="2024-09-14 01:09:05.453664288Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=0ab0ec49-b614-4fb9-a130-a1805297ec50 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 01:09:05 ha-401927 crio[642]: time="2024-09-14 01:09:05.453871848Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=0ab0ec49-b614-4fb9-a130-a1805297ec50 name=/runtime.v1.ImageService/ImageStatus
	Sep 14 01:09:05 ha-401927 crio[642]: time="2024-09-14 01:09:05.454795822Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=1599462b-b647-4706-8514-25381606416f name=/runtime.v1.ImageService/ImageStatus
	Sep 14 01:09:05 ha-401927 crio[642]: time="2024-09-14 01:09:05.454982123Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=1599462b-b647-4706-8514-25381606416f name=/runtime.v1.ImageService/ImageStatus
	Sep 14 01:09:05 ha-401927 crio[642]: time="2024-09-14 01:09:05.455633480Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-401927/kube-controller-manager" id=fa941024-4906-465d-b731-c9b0bfc282ca name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 01:09:05 ha-401927 crio[642]: time="2024-09-14 01:09:05.455723587Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 14 01:09:05 ha-401927 crio[642]: time="2024-09-14 01:09:05.536514770Z" level=info msg="Created container 6202a6e3b7b78cb6daf75ccabceee3bff9a01f3fa5d33bcf2b2d504c21d66727: kube-system/kube-controller-manager-ha-401927/kube-controller-manager" id=fa941024-4906-465d-b731-c9b0bfc282ca name=/runtime.v1.RuntimeService/CreateContainer
	Sep 14 01:09:05 ha-401927 crio[642]: time="2024-09-14 01:09:05.537064615Z" level=info msg="Starting container: 6202a6e3b7b78cb6daf75ccabceee3bff9a01f3fa5d33bcf2b2d504c21d66727" id=bafe34a3-ffff-4fab-9982-d52b394c03c7 name=/runtime.v1.RuntimeService/StartContainer
	Sep 14 01:09:05 ha-401927 crio[642]: time="2024-09-14 01:09:05.545796791Z" level=info msg="Started container" PID=1872 containerID=6202a6e3b7b78cb6daf75ccabceee3bff9a01f3fa5d33bcf2b2d504c21d66727 description=kube-system/kube-controller-manager-ha-401927/kube-controller-manager id=bafe34a3-ffff-4fab-9982-d52b394c03c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d096cba2653e53db549559629b1ea2674b881edc7dbbe38da51f1a24c1f4f528
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.114415605Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.119309882Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.119347485Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.119369630Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.122533576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.122571458Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.122592012Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.125775116Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.125809905Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.125825889Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.128808046Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 14 01:09:16 ha-401927 crio[642]: time="2024-09-14 01:09:16.128842187Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6202a6e3b7b78       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   28 seconds ago       Running             kube-controller-manager   8                   d096cba2653e5       kube-controller-manager-ha-401927
	71ef0ec91d294       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   31 seconds ago       Running             storage-provisioner       5                   ef9dbedccb262       storage-provisioner
	2195175e5547c       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   40 seconds ago       Running             kube-vip                  3                   beb3d1ffc5d4e       kube-vip-ha-401927
	bda05a282aa33       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   44 seconds ago       Running             kube-apiserver            4                   e891dbab8b7ae       kube-apiserver-ha-401927
	207e0f0bd7f43       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   57 seconds ago       Running             busybox                   2                   0fbb561c68a09       busybox-7dff88458-t72d5
	9d311295771e1       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   58 seconds ago       Running             coredns                   2                   ae984983edf20       coredns-7c65d6cfc9-ghkt8
	80c95de33d955       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   58 seconds ago       Running             kindnet-cni               2                   b6acbffeabe91       kindnet-wx4k5
	45092c2ce42e8       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   58 seconds ago       Running             kube-proxy                2                   12348b91cc3ac       kube-proxy-dh9sg
	ba611f1b8e767       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   e03ac792c3b74       coredns-7c65d6cfc9-zrv9t
	4821644f301a7       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   About a minute ago   Exited              kube-controller-manager   7                   d096cba2653e5       kube-controller-manager-ha-401927
	e5919bf8e27ca       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       4                   ef9dbedccb262       storage-provisioner
	1756ee47fd6c8       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   About a minute ago   Running             kube-scheduler            2                   8b5011d9e735f       kube-scheduler-ha-401927
	b8eaeb878d6a1       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   About a minute ago   Exited              kube-apiserver            3                   e891dbab8b7ae       kube-apiserver-ha-401927
	3bfaa72c1a913       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   About a minute ago   Exited              kube-vip                  2                   beb3d1ffc5d4e       kube-vip-ha-401927
	a7c90da2e5ab5       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Running             etcd                      2                   35762b6b4e137       etcd-ha-401927
	
	
	==> coredns [9d311295771e19036d29f7e48cb7cf3bfd8ebd306a798b56fcdad42be2200175] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47447 - 37582 "HINFO IN 1497100881870959181.1320449829298516941. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022543683s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[611306097]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 01:08:35.726) (total time: 30001ms):
	Trace[611306097]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:09:05.727)
	Trace[611306097]: [30.001623392s] [30.001623392s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2113006653]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 01:08:35.726) (total time: 30002ms):
	Trace[2113006653]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:09:05.728)
	Trace[2113006653]: [30.00219855s] [30.00219855s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1968313985]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 01:08:35.726) (total time: 30003ms):
	Trace[1968313985]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (01:09:05.730)
	Trace[1968313985]: [30.003823997s] [30.003823997s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [ba611f1b8e76736a5b1de2f2bb177107c8b84fae4683a06768d39495e2074fe9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50885 - 48498 "HINFO IN 122661722938404325.6930344939879080039. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006784169s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[704850411]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 01:08:33.584) (total time: 30000ms):
	Trace[704850411]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (01:09:03.585)
	Trace[704850411]: [30.000645706s] [30.000645706s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1633334430]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 01:08:33.585) (total time: 30000ms):
	Trace[1633334430]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (01:09:03.585)
	Trace[1633334430]: [30.000722382s] [30.000722382s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1574622956]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Sep-2024 01:08:33.585) (total time: 30000ms):
	Trace[1574622956]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (01:09:03.585)
	Trace[1574622956]: [30.000849107s] [30.000849107s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-401927
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-401927
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-401927
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_58_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:58:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-401927
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 01:08:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 14 Sep 2024 01:08:13 +0000   Sat, 14 Sep 2024 01:09:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 14 Sep 2024 01:08:13 +0000   Sat, 14 Sep 2024 01:09:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 14 Sep 2024 01:08:13 +0000   Sat, 14 Sep 2024 01:09:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 14 Sep 2024 01:08:13 +0000   Sat, 14 Sep 2024 01:09:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-401927
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 af637855a1c14da2b3b9a42174c988a2
	  System UUID:                3378a9a0-2cce-4e17-9253-15cf378c202c
	  Boot ID:                    fb6d1488-4ff6-49a9-b7dc-0ab0c636005f
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-t72d5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 coredns-7c65d6cfc9-ghkt8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-7c65d6cfc9-zrv9t             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-401927                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-wx4k5                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-401927             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-401927    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-dh9sg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-401927             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-401927                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   Starting                 4m40s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-401927 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-401927 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-401927 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node ha-401927 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node ha-401927 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node ha-401927 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           10m                    node-controller  Node ha-401927 event: Registered Node ha-401927 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-401927 status is now: NodeReady
	  Normal   RegisteredNode           10m                    node-controller  Node ha-401927 event: Registered Node ha-401927 in Controller
	  Normal   RegisteredNode           8m58s                  node-controller  Node ha-401927 event: Registered Node ha-401927 in Controller
	  Normal   RegisteredNode           6m19s                  node-controller  Node ha-401927 event: Registered Node ha-401927 in Controller
	  Warning  CgroupV1                 5m36s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     5m36s (x7 over 5m36s)  kubelet          Node ha-401927 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node ha-401927 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node ha-401927 status is now: NodeHasSufficientMemory
	  Normal   Starting                 5m36s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m57s                  node-controller  Node ha-401927 event: Registered Node ha-401927 in Controller
	  Normal   RegisteredNode           4m2s                   node-controller  Node ha-401927 event: Registered Node ha-401927 in Controller
	  Normal   NodeNotReady             3m47s                  node-controller  Node ha-401927 status is now: NodeNotReady
	  Normal   RegisteredNode           3m29s                  node-controller  Node ha-401927 event: Registered Node ha-401927 in Controller
	  Normal   Starting                 119s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  119s (x8 over 119s)    kubelet          Node ha-401927 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    119s (x8 over 119s)    kubelet          Node ha-401927 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s (x7 over 119s)    kubelet          Node ha-401927 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                    node-controller  Node ha-401927 event: Registered Node ha-401927 in Controller
	  Normal   RegisteredNode           25s                    node-controller  Node ha-401927 event: Registered Node ha-401927 in Controller
	  Normal   NodeNotReady             7s                     node-controller  Node ha-401927 status is now: NodeNotReady
	
	
	Name:               ha-401927-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-401927-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-401927
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T00_59_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:59:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-401927-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 01:09:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 01:08:14 +0000   Sat, 14 Sep 2024 00:59:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 01:08:14 +0000   Sat, 14 Sep 2024 00:59:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 01:08:14 +0000   Sat, 14 Sep 2024 00:59:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 01:08:14 +0000   Sat, 14 Sep 2024 00:59:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-401927-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 688a8ce1bfcb45d19fef7bc9b80b756d
	  System UUID:                6d1b517c-dec2-416b-8c2c-029ce377bf68
	  Boot ID:                    fb6d1488-4ff6-49a9-b7dc-0ab0c636005f
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kp5pc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 etcd-ha-401927-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-b9pww                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-401927-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-401927-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vb5lf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-401927-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-401927-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 6m21s                  kube-proxy       
	  Normal   Starting                 4m47s                  kube-proxy       
	  Normal   Starting                 73s                    kube-proxy       
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-401927-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-401927-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-401927-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-401927-m02 event: Registered Node ha-401927-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-401927-m02 event: Registered Node ha-401927-m02 in Controller
	  Normal   RegisteredNode           8m58s                  node-controller  Node ha-401927-m02 event: Registered Node ha-401927-m02 in Controller
	  Normal   Starting                 6m58s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m58s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     6m57s (x7 over 6m58s)  kubelet          Node ha-401927-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m57s (x8 over 6m58s)  kubelet          Node ha-401927-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  6m57s (x8 over 6m58s)  kubelet          Node ha-401927-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           6m19s                  node-controller  Node ha-401927-m02 event: Registered Node ha-401927-m02 in Controller
	  Normal   NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node ha-401927-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m34s (x7 over 5m34s)  kubelet          Node ha-401927-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node ha-401927-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m34s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           4m57s                  node-controller  Node ha-401927-m02 event: Registered Node ha-401927-m02 in Controller
	  Normal   RegisteredNode           4m2s                   node-controller  Node ha-401927-m02 event: Registered Node ha-401927-m02 in Controller
	  Normal   RegisteredNode           3m29s                  node-controller  Node ha-401927-m02 event: Registered Node ha-401927-m02 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-401927-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-401927-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node ha-401927-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                    node-controller  Node ha-401927-m02 event: Registered Node ha-401927-m02 in Controller
	  Normal   RegisteredNode           25s                    node-controller  Node ha-401927-m02 event: Registered Node ha-401927-m02 in Controller
	
	
	Name:               ha-401927-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-401927-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=ha-401927
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_14T01_01_43_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 01:01:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-401927-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 01:09:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 01:09:12 +0000   Sat, 14 Sep 2024 01:09:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 01:09:12 +0000   Sat, 14 Sep 2024 01:09:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 01:09:12 +0000   Sat, 14 Sep 2024 01:09:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 01:09:12 +0000   Sat, 14 Sep 2024 01:09:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-401927-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cdf29151a214043a10d945543ad70f0
	  System UUID:                24cf91ca-e789-426c-8236-61342a967be8
	  Boot ID:                    fb6d1488-4ff6-49a9-b7dc-0ab0c636005f
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-89rjr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kindnet-2sh8d              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m52s
	  kube-system                 kube-proxy-bx82b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m58s                  kube-proxy       
	  Normal   Starting                 16s                    kube-proxy       
	  Normal   Starting                 7m49s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    7m52s (x2 over 7m52s)  kubelet          Node ha-401927-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m52s (x2 over 7m52s)  kubelet          Node ha-401927-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  7m52s (x2 over 7m52s)  kubelet          Node ha-401927-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m49s                  node-controller  Node ha-401927-m04 event: Registered Node ha-401927-m04 in Controller
	  Normal   RegisteredNode           7m48s                  node-controller  Node ha-401927-m04 event: Registered Node ha-401927-m04 in Controller
	  Normal   RegisteredNode           7m48s                  node-controller  Node ha-401927-m04 event: Registered Node ha-401927-m04 in Controller
	  Normal   NodeReady                7m39s                  kubelet          Node ha-401927-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m19s                  node-controller  Node ha-401927-m04 event: Registered Node ha-401927-m04 in Controller
	  Normal   RegisteredNode           4m57s                  node-controller  Node ha-401927-m04 event: Registered Node ha-401927-m04 in Controller
	  Normal   NodeNotReady             4m17s                  node-controller  Node ha-401927-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m2s                   node-controller  Node ha-401927-m04 event: Registered Node ha-401927-m04 in Controller
	  Normal   RegisteredNode           3m29s                  node-controller  Node ha-401927-m04 event: Registered Node ha-401927-m04 in Controller
	  Normal   Starting                 3m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m16s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m10s (x7 over 3m16s)  kubelet          Node ha-401927-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    3m3s (x8 over 3m16s)   kubelet          Node ha-401927-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  3m3s (x8 over 3m16s)   kubelet          Node ha-401927-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           77s                    node-controller  Node ha-401927-m04 event: Registered Node ha-401927-m04 in Controller
	  Normal   NodeNotReady             37s                    node-controller  Node ha-401927-m04 status is now: NodeNotReady
	  Normal   Starting                 35s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     29s (x7 over 35s)      kubelet          Node ha-401927-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           25s                    node-controller  Node ha-401927-m04 event: Registered Node ha-401927-m04 in Controller
	  Normal   NodeHasNoDiskPressure    22s (x8 over 35s)      kubelet          Node ha-401927-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  22s (x8 over 35s)      kubelet          Node ha-401927-m04 status is now: NodeHasSufficientMemory
	
	
	==> dmesg <==
	
	
	==> etcd [a7c90da2e5ab5b26a2e2b46c68d92bd337daa561dd45f5d5f3b2645c070dcae5] <==
	{"level":"warn","ts":"2024-09-14T01:08:04.280671Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.359140Z","time spent":"2.921526164s","remote":"127.0.0.1:34284","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":2,"response size":7206,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.281411Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.357506Z","time spent":"2.923888562s","remote":"127.0.0.1:34316","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":2,"response size":5925,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283410Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.190262Z","time spent":"3.093129566s","remote":"127.0.0.1:34366","response type":"/etcdserverpb.KV/Range","request count":0,"request size":97,"response count":21,"response size":20214,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" limit:500 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283436Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.189023Z","time spent":"3.094407206s","remote":"127.0.0.1:34196","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":1016,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283463Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.164157Z","time spent":"3.119288894s","remote":"127.0.0.1:34162","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":12,"response size":8719,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283479Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.384857Z","time spent":"2.898616501s","remote":"127.0.0.1:34240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":13,"response size":14421,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283493Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.356599Z","time spent":"2.926889575s","remote":"127.0.0.1:34300","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":2,"response size":7610,"request content":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283507Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.350008Z","time spent":"2.933493923s","remote":"127.0.0.1:34366","response type":"/etcdserverpb.KV/Range","request count":0,"request size":97,"response count":21,"response size":20214,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283524Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.150720Z","time spent":"3.132796998s","remote":"127.0.0.1:42392","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":3,"response size":18245,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283549Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:00.816532Z","time spent":"3.467011514s","remote":"127.0.0.1:34182","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":466,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283565Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.384827Z","time spent":"2.898732223s","remote":"127.0.0.1:34252","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":8,"response size":5443,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283579Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.383502Z","time spent":"2.900072064s","remote":"127.0.0.1:42398","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":29,"response size":155759,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283594Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.370300Z","time spent":"2.913288172s","remote":"127.0.0.1:34178","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":55,"response size":39217,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283616Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.383398Z","time spent":"2.900205329s","remote":"127.0.0.1:42390","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":3,"response size":2490,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283630Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.381245Z","time spent":"2.902380638s","remote":"127.0.0.1:42392","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":3,"response size":18245,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283643Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.379511Z","time spent":"2.90412801s","remote":"127.0.0.1:42404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":2,"response size":1932,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.283658Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.379462Z","time spent":"2.904189801s","remote":"127.0.0.1:42408","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":42,"response size":9241,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.284661Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:00.538927Z","time spent":"3.745719603s","remote":"127.0.0.1:42488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":712,"request content":"key:\"/registry/leases/kube-system/apiserver-tt5w7weus3qidqz2p3c3fnhgk4\" "}
	{"level":"warn","ts":"2024-09-14T01:08:04.286329Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.395708Z","time spent":"2.890606785s","remote":"127.0.0.1:42404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":2,"response size":1932,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.286512Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.388879Z","time spent":"2.897621021s","remote":"127.0.0.1:42314","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":11,"response size":18824,"request content":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:04.286532Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-14T01:08:01.387996Z","time spent":"2.898530611s","remote":"127.0.0.1:42326","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":4,"response size":1407,"request content":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-14T01:08:20.099737Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.30132ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-ghkt8\" ","response":"range_response_count:1 size:5137"}
	{"level":"info","ts":"2024-09-14T01:08:20.099893Z","caller":"traceutil/trace.go:171","msg":"trace[28292146] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-ghkt8; range_end:; response_count:1; response_revision:2836; }","duration":"100.466599ms","start":"2024-09-14T01:08:19.999411Z","end":"2024-09-14T01:08:20.099878Z","steps":["trace[28292146] 'agreement among raft nodes before linearized reading'  (duration: 100.117783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-14T01:08:20.116086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.631522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-ha-401927-m02\" ","response":"range_response_count:1 size:4463"}
	{"level":"info","ts":"2024-09-14T01:08:20.116143Z","caller":"traceutil/trace.go:171","msg":"trace[783408862] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-ha-401927-m02; range_end:; response_count:1; response_revision:2837; }","duration":"103.700977ms","start":"2024-09-14T01:08:20.012430Z","end":"2024-09-14T01:08:20.116131Z","steps":["trace[783408862] 'agreement among raft nodes before linearized reading'  (duration: 103.546897ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:09:34 up  4:51,  0 users,  load average: 3.06, 2.49, 2.05
	Linux ha-401927 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [80c95de33d95501088334a29de5f9e43fe52b1e7d5ef2d493932f34bf91ca8f7] <==
	Trace[655577963]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (01:09:06.116)
	Trace[655577963]: [30.002580772s] [30.002580772s] END
	E0914 01:09:06.118742       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	W0914 01:09:06.116158       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0914 01:09:06.118938       1 trace.go:236] Trace[1009787584]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (14-Sep-2024 01:08:36.114) (total time: 30004ms):
	Trace[1009787584]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:09:06.116)
	Trace[1009787584]: [30.004706498s] [30.004706498s] END
	E0914 01:09:06.119117       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0914 01:09:07.515805       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0914 01:09:07.515836       1 metrics.go:61] Registering metrics
	I0914 01:09:07.515878       1 controller.go:374] Syncing nftables rules
	I0914 01:09:16.114093       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 01:09:16.114179       1 main.go:299] handling current node
	I0914 01:09:16.118892       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0914 01:09:16.118927       1 main.go:322] Node ha-401927-m02 has CIDR [10.244.1.0/24] 
	I0914 01:09:16.119063       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0914 01:09:16.119135       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0914 01:09:16.119143       1 main.go:322] Node ha-401927-m04 has CIDR [10.244.3.0/24] 
	I0914 01:09:16.119185       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0914 01:09:26.119804       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 01:09:26.119862       1 main.go:299] handling current node
	I0914 01:09:26.119878       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0914 01:09:26.119884       1 main.go:322] Node ha-401927-m02 has CIDR [10.244.1.0/24] 
	I0914 01:09:26.120013       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0914 01:09:26.120027       1 main.go:322] Node ha-401927-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b8eaeb878d6a1fef3695ee63fb7ab3e5e6d0093e5c8623f5a4b43577fd2fc7cb] <==
	E0914 01:08:04.253388       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0914 01:08:04.253586       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0914 01:08:04.254529       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0914 01:08:04.254793       1 watcher.go:342] watch chan error: etcdserver: no leader
	W0914 01:08:04.257544       1 reflector.go:561] storage/cacher.go:/secrets: failed to list *core.Secret: etcdserver: leader changed
	E0914 01:08:04.257913       1 cacher.go:478] cacher (secrets): unexpected ListAndWatch error: failed to list *core.Secret: etcdserver: leader changed; reinitializing...
	I0914 01:08:04.399204       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 01:08:04.405822       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 01:08:04.405917       1 policy_source.go:224] refreshing policies
	W0914 01:08:04.425935       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0914 01:08:04.427383       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 01:08:04.432887       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 01:08:04.433325       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0914 01:08:04.433502       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 01:08:04.441057       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 01:08:04.450224       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 01:08:04.454473       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 01:08:04.454570       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 01:08:04.454615       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 01:08:04.454660       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0914 01:08:04.456599       1 cache.go:39] Caches are synced for autoregister controller
	E0914 01:08:04.458911       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0914 01:08:04.475426       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 01:08:05.348532       1 shared_informer.go:320] Caches are synced for configmaps
	F0914 01:08:48.815091       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [bda05a282aa339c1e30b2cef7b57cd62fa0339aa924b115fbfa2104c28306d84] <==
	I0914 01:08:52.842055       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0914 01:08:52.842090       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0914 01:08:52.818031       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0914 01:08:52.843738       1 aggregator.go:171] initial CRD sync complete...
	I0914 01:08:52.843779       1 autoregister_controller.go:144] Starting autoregister controller
	I0914 01:08:52.843810       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0914 01:08:52.843840       1 cache.go:39] Caches are synced for autoregister controller
	I0914 01:08:52.911314       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0914 01:08:52.911422       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0914 01:08:52.935363       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0914 01:08:52.935485       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 01:08:52.947891       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0914 01:08:52.954457       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0914 01:08:52.966749       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 01:08:52.966940       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 01:08:52.967090       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I0914 01:08:52.967133       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0914 01:08:52.967420       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0914 01:08:52.967468       1 policy_source.go:224] refreshing policies
	I0914 01:08:53.007289       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 01:08:53.068048       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0914 01:08:53.723818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0914 01:08:54.112388       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0914 01:08:54.114833       1 controller.go:615] quota admission added evaluator for: endpoints
	I0914 01:08:54.123812       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4821644f301a748e6603df5aaaaba3a6f5e86175f72a7750c903431cf8b6a420] <==
	I0914 01:08:33.229754       1 serving.go:386] Generated self-signed cert in-memory
	I0914 01:08:33.993669       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0914 01:08:33.993695       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:08:33.995088       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0914 01:08:33.995271       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0914 01:08:33.995405       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0914 01:08:33.995490       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0914 01:08:44.016702       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [6202a6e3b7b78cb6daf75ccabceee3bff9a01f3fa5d33bcf2b2d504c21d66727] <==
	I0914 01:09:09.272143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-401927-m02"
	I0914 01:09:09.272152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-401927-m04"
	I0914 01:09:09.350907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-401927-m04"
	I0914 01:09:09.413030       1 shared_informer.go:320] Caches are synced for stateful set
	I0914 01:09:09.463581       1 shared_informer.go:320] Caches are synced for disruption
	I0914 01:09:09.471890       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 01:09:09.497305       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 01:09:09.512863       1 shared_informer.go:320] Caches are synced for deployment
	I0914 01:09:09.905836       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 01:09:09.956391       1 shared_informer.go:320] Caches are synced for garbage collector
	I0914 01:09:09.956493       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0914 01:09:12.195225       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-401927-m04"
	I0914 01:09:12.195378       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-401927-m04"
	I0914 01:09:12.212563       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-401927-m04"
	I0914 01:09:14.286287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-401927-m04"
	I0914 01:09:17.328446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.072µs"
	I0914 01:09:18.480015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.484687ms"
	I0914 01:09:18.480086       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.901µs"
	I0914 01:09:27.116663       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-401927-m04"
	I0914 01:09:27.116720       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-401927"
	I0914 01:09:27.138665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-401927"
	I0914 01:09:27.283235       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.738716ms"
	I0914 01:09:27.283528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="126.52µs"
	I0914 01:09:29.389935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-401927"
	I0914 01:09:32.423984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-401927"
	
	
	==> kube-proxy [45092c2ce42e8b44c25cf15b5fd7e21bea933f819f7b33a2e85a84879d239622] <==
	I0914 01:08:35.707934       1 server_linux.go:66] "Using iptables proxy"
	I0914 01:08:35.812049       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0914 01:08:35.812126       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 01:08:35.857377       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 01:08:35.857436       1 server_linux.go:169] "Using iptables Proxier"
	I0914 01:08:35.862543       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 01:08:35.862884       1 server.go:483] "Version info" version="v1.31.1"
	I0914 01:08:35.862907       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 01:08:35.867031       1 config.go:199] "Starting service config controller"
	I0914 01:08:35.867079       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 01:08:35.867111       1 config.go:105] "Starting endpoint slice config controller"
	I0914 01:08:35.867129       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 01:08:35.867818       1 config.go:328] "Starting node config controller"
	I0914 01:08:35.867836       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 01:08:35.968125       1 shared_informer.go:320] Caches are synced for node config
	I0914 01:08:35.968174       1 shared_informer.go:320] Caches are synced for service config
	I0914 01:08:35.968213       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1756ee47fd6c82b818ac4e0cf54e50e8dc063899d604835a08e40a831188a680] <==
	W0914 01:07:59.542273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 01:07:59.542318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 01:08:03.379563       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 01:08:03.379610       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 01:08:03.796240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 01:08:03.796287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 01:08:03.826930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 01:08:03.826974       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 01:08:03.832980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 01:08:03.833032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0914 01:08:17.738840       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0914 01:08:52.753014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:56080->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.753206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:56168->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.753579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:56160->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.753704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:56156->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.753939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:56152->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.754052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:56138->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.754181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:56134->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.754293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:56132->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.754411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:56126->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.754509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:56118->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.754617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:56110->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.754732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:56106->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.754832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:56090->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0914 01:08:52.754925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:56084->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 14 01:08:45 ha-401927 kubelet[757]: E0914 01:08:45.645669     757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-401927_kube-system(e9a210669749aa997a43b61e5dd2bee2)\"" pod="kube-system/kube-controller-manager-ha-401927" podUID="e9a210669749aa997a43b61e5dd2bee2"
	Sep 14 01:08:49 ha-401927 kubelet[757]: I0914 01:08:49.654657     757 scope.go:117] "RemoveContainer" containerID="b8eaeb878d6a1fef3695ee63fb7ab3e5e6d0093e5c8623f5a4b43577fd2fc7cb"
	Sep 14 01:08:49 ha-401927 kubelet[757]: I0914 01:08:49.655278     757 status_manager.go:851] "Failed to get status for pod" podUID="303719b2ce951f9abb65484c4fe96777" pod="kube-system/kube-apiserver-ha-401927" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-401927\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Sep 14 01:08:49 ha-401927 kubelet[757]: E0914 01:08:49.656444     757 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-401927.17f4f6eea537378c\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-401927.17f4f6eea537378c  kube-system   2777 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-401927,UID:303719b2ce951f9abb65484c4fe96777,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.1\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-401927,},FirstTimestamp:2024-09-14 01:07:42 +0000 UTC,LastTimestamp:2024-09-14 01:08:49.655705143 +0000 UTC m=+74.316897154,Count:2,Type:Normal,EventTime:0001-01-01 00:00
:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-401927,}"
	Sep 14 01:08:50 ha-401927 kubelet[757]: I0914 01:08:50.591344     757 scope.go:117] "RemoveContainer" containerID="4821644f301a748e6603df5aaaaba3a6f5e86175f72a7750c903431cf8b6a420"
	Sep 14 01:08:50 ha-401927 kubelet[757]: E0914 01:08:50.591721     757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-401927_kube-system(e9a210669749aa997a43b61e5dd2bee2)\"" pod="kube-system/kube-controller-manager-ha-401927" podUID="e9a210669749aa997a43b61e5dd2bee2"
	Sep 14 01:08:52 ha-401927 kubelet[757]: E0914 01:08:52.793860     757 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:43624->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 14 01:08:52 ha-401927 kubelet[757]: E0914 01:08:52.793967     757 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:43650->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 14 01:08:52 ha-401927 kubelet[757]: E0914 01:08:52.794008     757 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:43642->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 14 01:08:52 ha-401927 kubelet[757]: E0914 01:08:52.794040     757 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:43680->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 14 01:08:53 ha-401927 kubelet[757]: I0914 01:08:53.663894     757 scope.go:117] "RemoveContainer" containerID="3bfaa72c1a91309e66d09d8b943246e6b854c20197e1b4f447faf1afbda87a8f"
	Sep 14 01:08:55 ha-401927 kubelet[757]: E0914 01:08:55.558875     757 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276135558696415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:08:55 ha-401927 kubelet[757]: E0914 01:08:55.558912     757 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276135558696415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:09:02 ha-401927 kubelet[757]: I0914 01:09:02.684741     757 scope.go:117] "RemoveContainer" containerID="e5919bf8e27ca89adb7cbe3a171c06f1823d27d53d69a9d582f3ddc499480ddb"
	Sep 14 01:09:04 ha-401927 kubelet[757]: E0914 01:09:04.372409     757 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-401927?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 14 01:09:05 ha-401927 kubelet[757]: I0914 01:09:05.453188     757 scope.go:117] "RemoveContainer" containerID="4821644f301a748e6603df5aaaaba3a6f5e86175f72a7750c903431cf8b6a420"
	Sep 14 01:09:05 ha-401927 kubelet[757]: E0914 01:09:05.562948     757 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276145562681924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:09:05 ha-401927 kubelet[757]: E0914 01:09:05.562980     757 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276145562681924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:09:14 ha-401927 kubelet[757]: E0914 01:09:14.373411     757 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-401927?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
	Sep 14 01:09:15 ha-401927 kubelet[757]: E0914 01:09:15.564427     757 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276155564106849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:09:15 ha-401927 kubelet[757]: E0914 01:09:15.564466     757 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276155564106849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:09:24 ha-401927 kubelet[757]: E0914 01:09:24.374107     757 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-401927?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
	Sep 14 01:09:25 ha-401927 kubelet[757]: E0914 01:09:25.565793     757 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276165565611471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:09:25 ha-401927 kubelet[757]: E0914 01:09:25.565827     757 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726276165565611471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 14 01:09:34 ha-401927 kubelet[757]: E0914 01:09:34.374459     757 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-401927?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-401927 -n ha-401927
helpers_test.go:261: (dbg) Run:  kubectl --context ha-401927 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (128.43s)

                                                
                                    

Test pass (294/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.01
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.2
9 TestDownloadOnly/v1.20.0/DeleteAll 0.36
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.31.1/json-events 6.07
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 214.79
31 TestAddons/serial/GCPAuth/Namespaces 0.22
35 TestAddons/parallel/InspektorGadget 11.76
39 TestAddons/parallel/CSI 52.88
40 TestAddons/parallel/Headlamp 15.76
41 TestAddons/parallel/CloudSpanner 5.78
42 TestAddons/parallel/LocalPath 8.57
43 TestAddons/parallel/NvidiaDevicePlugin 6.76
44 TestAddons/parallel/Yakd 11.72
45 TestAddons/StoppedEnableDisable 12.15
46 TestCertOptions 35.09
47 TestCertExpiration 234.41
49 TestForceSystemdFlag 41
50 TestForceSystemdEnv 32.51
56 TestErrorSpam/setup 30.29
57 TestErrorSpam/start 0.76
58 TestErrorSpam/status 1.06
59 TestErrorSpam/pause 1.79
60 TestErrorSpam/unpause 1.83
61 TestErrorSpam/stop 1.44
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 49.8
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 26.84
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.32
73 TestFunctional/serial/CacheCmd/cache/add_local 1.44
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 37.28
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.72
84 TestFunctional/serial/LogsFileCmd 1.77
85 TestFunctional/serial/InvalidService 4.23
87 TestFunctional/parallel/ConfigCmd 0.49
88 TestFunctional/parallel/DashboardCmd 8.15
89 TestFunctional/parallel/DryRun 0.43
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 0.99
95 TestFunctional/parallel/ServiceCmdConnect 10.58
96 TestFunctional/parallel/AddonsCmd 0.18
97 TestFunctional/parallel/PersistentVolumeClaim 26.13
99 TestFunctional/parallel/SSHCmd 0.65
100 TestFunctional/parallel/CpCmd 2.28
102 TestFunctional/parallel/FileSync 0.32
103 TestFunctional/parallel/CertSync 2.07
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
111 TestFunctional/parallel/License 0.77
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.45
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
125 TestFunctional/parallel/ProfileCmd/profile_list 0.37
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
127 TestFunctional/parallel/MountCmd/any-port 10.37
128 TestFunctional/parallel/ServiceCmd/List 0.61
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
131 TestFunctional/parallel/ServiceCmd/Format 0.37
132 TestFunctional/parallel/ServiceCmd/URL 0.37
133 TestFunctional/parallel/MountCmd/specific-port 2.32
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.23
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.09
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.4
142 TestFunctional/parallel/ImageCommands/Setup 0.76
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.67
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 173.67
160 TestMultiControlPlane/serial/DeployApp 8.77
161 TestMultiControlPlane/serial/PingHostFromPods 1.62
162 TestMultiControlPlane/serial/AddWorkerNode 35.65
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.74
165 TestMultiControlPlane/serial/CopyFile 18.44
166 TestMultiControlPlane/serial/StopSecondaryNode 12.82
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 32.31
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.43
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 205.23
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.44
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
173 TestMultiControlPlane/serial/StopCluster 35.81
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
176 TestMultiControlPlane/serial/AddSecondaryNode 69.39
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
181 TestJSONOutput/start/Command 78.35
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.77
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.66
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.88
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 37.27
207 TestKicCustomNetwork/use_default_bridge_network 36.26
208 TestKicExistingNetwork 33.56
209 TestKicCustomSubnet 35.91
210 TestKicStaticIP 32.07
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 69.21
215 TestMountStart/serial/StartWithMountFirst 6.72
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 6.82
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.61
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.97
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 103.21
227 TestMultiNode/serial/DeployApp2Nodes 6.95
228 TestMultiNode/serial/PingHostFrom2Pods 0.98
229 TestMultiNode/serial/AddNode 30.61
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.31
232 TestMultiNode/serial/CopyFile 9.65
233 TestMultiNode/serial/StopNode 2.2
234 TestMultiNode/serial/StartAfterStop 10.01
235 TestMultiNode/serial/RestartKeepsNodes 103.4
236 TestMultiNode/serial/DeleteNode 5.54
237 TestMultiNode/serial/StopMultiNode 23.88
238 TestMultiNode/serial/RestartMultiNode 54.95
239 TestMultiNode/serial/ValidateNameConflict 36.79
244 TestPreload 133.58
246 TestScheduledStopUnix 104.66
249 TestInsufficientStorage 11.24
250 TestRunningBinaryUpgrade 85.14
252 TestKubernetesUpgrade 473.97
253 TestMissingContainerUpgrade 166.28
255 TestPause/serial/Start 59.55
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 39.27
259 TestNoKubernetes/serial/StartWithStopK8s 19.59
260 TestNoKubernetes/serial/Start 6.61
261 TestPause/serial/SecondStartNoReconfiguration 26.72
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
263 TestNoKubernetes/serial/ProfileList 0.85
264 TestNoKubernetes/serial/Stop 1.23
265 TestNoKubernetes/serial/StartNoArgs 8.29
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
267 TestPause/serial/Pause 0.77
268 TestPause/serial/VerifyStatus 0.38
269 TestPause/serial/Unpause 0.91
270 TestPause/serial/PauseAgain 1.06
271 TestPause/serial/DeletePaused 2.71
272 TestPause/serial/VerifyDeletedResources 0.12
273 TestStoppedBinaryUpgrade/Setup 1.22
274 TestStoppedBinaryUpgrade/Upgrade 84.52
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
290 TestNetworkPlugins/group/false 3.58
295 TestStartStop/group/old-k8s-version/serial/FirstStart 183.14
297 TestStartStop/group/no-preload/serial/FirstStart 64.78
298 TestStartStop/group/old-k8s-version/serial/DeployApp 11.81
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.27
300 TestStartStop/group/old-k8s-version/serial/Stop 12.06
301 TestStartStop/group/no-preload/serial/DeployApp 10.32
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
303 TestStartStop/group/old-k8s-version/serial/SecondStart 148.08
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
305 TestStartStop/group/no-preload/serial/Stop 12.09
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/no-preload/serial/SecondStart 335.65
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
311 TestStartStop/group/old-k8s-version/serial/Pause 2.93
313 TestStartStop/group/embed-certs/serial/FirstStart 50.35
314 TestStartStop/group/embed-certs/serial/DeployApp 10.49
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.31
316 TestStartStop/group/embed-certs/serial/Stop 12.06
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
318 TestStartStop/group/embed-certs/serial/SecondStart 267.37
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
322 TestStartStop/group/no-preload/serial/Pause 3.12
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.96
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.37
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 296.4
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
333 TestStartStop/group/embed-certs/serial/Pause 3.18
335 TestStartStop/group/newest-cni/serial/FirstStart 32.55
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
338 TestStartStop/group/newest-cni/serial/Stop 1.23
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
340 TestStartStop/group/newest-cni/serial/SecondStart 15.44
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
344 TestStartStop/group/newest-cni/serial/Pause 3.09
345 TestNetworkPlugins/group/auto/Start 48.5
346 TestNetworkPlugins/group/auto/KubeletFlags 0.28
347 TestNetworkPlugins/group/auto/NetCatPod 10.28
348 TestNetworkPlugins/group/auto/DNS 0.19
349 TestNetworkPlugins/group/auto/Localhost 0.17
350 TestNetworkPlugins/group/auto/HairPin 0.16
351 TestNetworkPlugins/group/flannel/Start 55.66
352 TestNetworkPlugins/group/flannel/ControllerPod 6.01
353 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
354 TestNetworkPlugins/group/flannel/NetCatPod 11.24
355 TestNetworkPlugins/group/flannel/DNS 0.26
356 TestNetworkPlugins/group/flannel/Localhost 0.21
357 TestNetworkPlugins/group/flannel/HairPin 0.27
358 TestNetworkPlugins/group/calico/Start 70.03
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
362 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.46
363 TestNetworkPlugins/group/custom-flannel/Start 63.23
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.45
366 TestNetworkPlugins/group/calico/NetCatPod 12.38
367 TestNetworkPlugins/group/calico/DNS 0.19
368 TestNetworkPlugins/group/calico/Localhost 0.18
369 TestNetworkPlugins/group/calico/HairPin 0.17
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.38
372 TestNetworkPlugins/group/custom-flannel/DNS 0.23
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
375 TestNetworkPlugins/group/kindnet/Start 89.42
376 TestNetworkPlugins/group/bridge/Start 77.24
377 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
378 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
379 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
380 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
381 TestNetworkPlugins/group/bridge/NetCatPod 10.29
382 TestNetworkPlugins/group/kindnet/DNS 0.18
383 TestNetworkPlugins/group/kindnet/Localhost 0.17
384 TestNetworkPlugins/group/kindnet/HairPin 0.16
385 TestNetworkPlugins/group/bridge/DNS 0.17
386 TestNetworkPlugins/group/bridge/Localhost 0.15
387 TestNetworkPlugins/group/bridge/HairPin 0.16
388 TestNetworkPlugins/group/enable-default-cni/Start 68.79
389 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
390 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
391 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
392 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
393 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (10.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-116392 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-116392 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.010878238s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.01s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-116392
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-116392: exit status 85 (203.600006ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-116392 | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |          |
	|         | -p download-only-116392        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:35:08
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:35:08.895684  874084 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:35:08.895904  874084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:08.895931  874084 out.go:358] Setting ErrFile to fd 2...
	I0914 00:35:08.895950  874084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:08.896332  874084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	W0914 00:35:08.896569  874084 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19640-868698/.minikube/config/config.json: open /home/jenkins/minikube-integration/19640-868698/.minikube/config/config.json: no such file or directory
	I0914 00:35:08.897400  874084 out.go:352] Setting JSON to true
	I0914 00:35:08.898424  874084 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15453,"bootTime":1726258656,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0914 00:35:08.898550  874084 start.go:139] virtualization:  
	I0914 00:35:08.901056  874084 out.go:97] [download-only-116392] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0914 00:35:08.901399  874084 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 00:35:08.901433  874084 notify.go:220] Checking for updates...
	I0914 00:35:08.902397  874084 out.go:169] MINIKUBE_LOCATION=19640
	I0914 00:35:08.903715  874084 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:35:08.904893  874084 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 00:35:08.906024  874084 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	I0914 00:35:08.907349  874084 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 00:35:08.909700  874084 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 00:35:08.909963  874084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:35:08.933175  874084 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:35:08.933323  874084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:09.013515  874084 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 00:35:09.002238077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:09.013638  874084 docker.go:318] overlay module found
	I0914 00:35:09.014926  874084 out.go:97] Using the docker driver based on user configuration
	I0914 00:35:09.014964  874084 start.go:297] selected driver: docker
	I0914 00:35:09.014972  874084 start.go:901] validating driver "docker" against <nil>
	I0914 00:35:09.015092  874084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:09.070950  874084 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 00:35:09.061656685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:09.071157  874084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:35:09.071445  874084 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0914 00:35:09.071610  874084 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 00:35:09.073320  874084 out.go:169] Using Docker driver with root privileges
	I0914 00:35:09.074765  874084 cni.go:84] Creating CNI manager for ""
	I0914 00:35:09.074830  874084 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:35:09.074846  874084 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 00:35:09.074927  874084 start.go:340] cluster config:
	{Name:download-only-116392 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-116392 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:35:09.076550  874084 out.go:97] Starting "download-only-116392" primary control-plane node in "download-only-116392" cluster
	I0914 00:35:09.076575  874084 cache.go:121] Beginning downloading kic base image for docker with crio
	I0914 00:35:09.077971  874084 out.go:97] Pulling base image v0.0.45-1726243947-19640 ...
	I0914 00:35:09.078002  874084 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 00:35:09.078128  874084 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 00:35:09.094176  874084 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 00:35:09.094375  874084 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 00:35:09.094480  874084 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 00:35:09.143367  874084 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0914 00:35:09.143394  874084 cache.go:56] Caching tarball of preloaded images
	I0914 00:35:09.144311  874084 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0914 00:35:09.145758  874084 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0914 00:35:09.145777  874084 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0914 00:35:09.227998  874084 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0914 00:35:13.666548  874084 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0914 00:35:13.666655  874084 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-116392 host does not exist
	  To start a cluster, run: "minikube start -p download-only-116392"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.36s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-116392
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-396021 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-396021 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.07032131s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-396021
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-396021: exit status 85 (65.655446ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-116392 | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | -p download-only-116392        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| delete  | -p download-only-116392        | download-only-116392 | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC | 14 Sep 24 00:35 UTC |
	| start   | -o=json --download-only        | download-only-396021 | jenkins | v1.34.0 | 14 Sep 24 00:35 UTC |                     |
	|         | -p download-only-396021        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:35:19
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:35:19.691380  874288 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:35:19.691604  874288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:19.691631  874288 out.go:358] Setting ErrFile to fd 2...
	I0914 00:35:19.691652  874288 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:19.691950  874288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 00:35:19.692462  874288 out.go:352] Setting JSON to true
	I0914 00:35:19.694152  874288 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15464,"bootTime":1726258656,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0914 00:35:19.694269  874288 start.go:139] virtualization:  
	I0914 00:35:19.720942  874288 out.go:97] [download-only-396021] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 00:35:19.721103  874288 notify.go:220] Checking for updates...
	I0914 00:35:19.736369  874288 out.go:169] MINIKUBE_LOCATION=19640
	I0914 00:35:19.769388  874288 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:35:19.800926  874288 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 00:35:19.833292  874288 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	I0914 00:35:19.864303  874288 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 00:35:19.945148  874288 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 00:35:19.945494  874288 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:35:19.967523  874288 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:35:19.967630  874288 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:20.022706  874288 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:35:20.011545723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:20.022895  874288 docker.go:318] overlay module found
	I0914 00:35:20.042406  874288 out.go:97] Using the docker driver based on user configuration
	I0914 00:35:20.042446  874288 start.go:297] selected driver: docker
	I0914 00:35:20.042454  874288 start.go:901] validating driver "docker" against <nil>
	I0914 00:35:20.042585  874288 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:20.100680  874288 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:35:20.089731342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:20.100905  874288 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:35:20.101240  874288 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0914 00:35:20.101487  874288 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 00:35:20.137076  874288 out.go:169] Using Docker driver with root privileges
	I0914 00:35:20.168645  874288 cni.go:84] Creating CNI manager for ""
	I0914 00:35:20.168740  874288 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0914 00:35:20.168762  874288 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 00:35:20.169430  874288 start.go:340] cluster config:
	{Name:download-only-396021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-396021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:35:20.202313  874288 out.go:97] Starting "download-only-396021" primary control-plane node in "download-only-396021" cluster
	I0914 00:35:20.202356  874288 cache.go:121] Beginning downloading kic base image for docker with crio
	I0914 00:35:20.234141  874288 out.go:97] Pulling base image v0.0.45-1726243947-19640 ...
	I0914 00:35:20.234225  874288 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 00:35:20.234249  874288 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:20.249332  874288 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 00:35:20.249479  874288 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 00:35:20.249503  874288 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 00:35:20.249508  874288 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 00:35:20.249516  874288 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 00:35:20.321858  874288 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0914 00:35:20.322777  874288 cache.go:56] Caching tarball of preloaded images
	I0914 00:35:20.322948  874288 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0914 00:35:20.363889  874288 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0914 00:35:20.363922  874288 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I0914 00:35:20.446446  874288 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0914 00:35:23.905017  874288 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I0914 00:35:23.905127  874288 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-868698/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-396021 host does not exist
	  To start a cluster, run: "minikube start -p download-only-396021"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-396021
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-918324 --alsologtostderr --binary-mirror http://127.0.0.1:44679 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-918324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-918324
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-885748
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-885748: exit status 85 (85.028359ms)

                                                
                                                
-- stdout --
	* Profile "addons-885748" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-885748"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-885748
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-885748: exit status 85 (73.122982ms)

                                                
                                                
-- stdout --
	* Profile "addons-885748" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-885748"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (214.79s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-885748 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-885748 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m34.793078533s)
--- PASS: TestAddons/Setup (214.79s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-885748 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-885748 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9rb75" [cc7ac5c3-576d-470e-abf9-04c372399c72] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003840261s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-885748
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-885748: (5.754783939s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

                                                
                                    
TestAddons/parallel/CSI (52.88s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.493334ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-885748 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-885748 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3e3f752f-be34-4ec9-aab3-887929be2cd8] Pending
helpers_test.go:344: "task-pv-pod" [3e3f752f-be34-4ec9-aab3-887929be2cd8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3e3f752f-be34-4ec9-aab3-887929be2cd8] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004555732s
addons_test.go:590: (dbg) Run:  kubectl --context addons-885748 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-885748 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-885748 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-885748 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-885748 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-885748 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-885748 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7bbbe26c-76e6-4a0d-b9de-0f03bbbe870a] Pending
helpers_test.go:344: "task-pv-pod-restore" [7bbbe26c-76e6-4a0d-b9de-0f03bbbe870a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7bbbe26c-76e6-4a0d-b9de-0f03bbbe870a] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003416929s
addons_test.go:632: (dbg) Run:  kubectl --context addons-885748 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-885748 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-885748 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-885748 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.777068602s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.88s)
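The testdata manifests referenced above (pvc.yaml, snapshot.yaml, pvc-restore.yaml) are not included in this report. As a rough sketch only, the snapshot-and-restore flow the CSI test exercises corresponds to objects along these lines; the object names come from the log above, while the storage class and snapshot class names are assumptions, not values confirmed here:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: hpvc
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 1Gi
    storageClassName: csi-hostpath-sc               # assumed: class provided by the csi-hostpath-driver addon
  ---
  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshot
  metadata:
    name: new-snapshot-demo
  spec:
    volumeSnapshotClassName: csi-hostpath-snapclass # assumed
    source:
      persistentVolumeClaimName: hpvc               # snapshot the claim used by task-pv-pod
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: hpvc-restore
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 1Gi
    storageClassName: csi-hostpath-sc               # assumed
    dataSource:                                     # restore the new claim from the snapshot
      apiGroup: snapshot.storage.k8s.io
      kind: VolumeSnapshot
      name: new-snapshot-demo

The test mounts hpvc in task-pv-pod, snapshots it, then mounts the restored claim hpvc-restore in task-pv-pod-restore before cleaning up and disabling the addon.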

                                                
                                    
TestAddons/parallel/Headlamp (15.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-885748 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-885748 --alsologtostderr -v=1: (1.0682063s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-dc2kz" [f6f22df2-6f5a-42ee-a32d-d074dee33139] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-dc2kz" [f6f22df2-6f5a-42ee-a32d-d074dee33139] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-dc2kz" [f6f22df2-6f5a-42ee-a32d-d074dee33139] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004039381s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-885748 addons disable headlamp --alsologtostderr -v=1: (5.68489451s)
--- PASS: TestAddons/parallel/Headlamp (15.76s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-qnlnm" [6d845354-a0c4-4564-a1d3-85ecc12d36e1] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003949962s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-885748
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                    
TestAddons/parallel/LocalPath (8.57s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-885748 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-885748 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885748 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [33379494-4aeb-4393-87ed-621d24ab2dc4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [33379494-4aeb-4393-87ed-621d24ab2dc4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [33379494-4aeb-4393-87ed-621d24ab2dc4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003803748s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-885748 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 ssh "cat /opt/local-path-provisioner/pvc-e2f482dc-2720-407b-92e8-aef35ea0dba3_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-885748 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-885748 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.57s)
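testdata/storage-provisioner-rancher/pvc.yaml and pod.yaml are likewise not reproduced here. A minimal sketch of a claim and writer pod that would produce the file1 read back over ssh above could look like the following; the storage class name and the file contents are assumptions, while the object names and the busybox container come from the log:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-pvc
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: local-path                    # assumed: class created by the storage-provisioner-rancher addon
    resources:
      requests:
        storage: 64Mi
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: test-local-path
    labels:
      run: test-local-path                          # matches the "run=test-local-path" wait above
  spec:
    restartPolicy: Never                            # pod completes, hence the Succeeded phase in the log
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "echo local-path-test > /test/file1"]
      volumeMounts:
      - name: data
        mountPath: /test
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc

The ssh step then reads the file straight from the provisioned directory under /opt/local-path-provisioner/ on the node to confirm the data landed on the host path.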

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9nphx" [8f3b2546-ef55-49b2-8f31-dd8f4ecdcf93] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005484311s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-885748
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                    
TestAddons/parallel/Yakd (11.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-gmz8s" [af09cb1a-668a-4764-b6a3-109a5be57346] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002941894s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-885748 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-885748 addons disable yakd --alsologtostderr -v=1: (5.714517894s)
--- PASS: TestAddons/parallel/Yakd (11.72s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.15s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-885748
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-885748: (11.889239239s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-885748
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-885748
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-885748
--- PASS: TestAddons/StoppedEnableDisable (12.15s)

                                                
                                    
TestCertOptions (35.09s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-850781 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0914 01:37:26.631242  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-850781 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.365051397s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-850781 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-850781 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-850781 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-850781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-850781
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-850781: (2.015661327s)
--- PASS: TestCertOptions (35.09s)

                                                
                                    
TestCertExpiration (234.41s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-172091 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-172091 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (34.155125233s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-172091 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-172091 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.632204508s)
helpers_test.go:175: Cleaning up "cert-expiration-172091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-172091
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-172091: (2.623729357s)
--- PASS: TestCertExpiration (234.41s)

                                                
                                    
TestForceSystemdFlag (41s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-180517 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-180517 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.449018269s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-180517 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-180517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-180517
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-180517: (2.271102141s)
--- PASS: TestForceSystemdFlag (41.00s)

                                                
                                    
TestForceSystemdEnv (32.51s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-069929 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-069929 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.201813287s)
helpers_test.go:175: Cleaning up "force-systemd-env-069929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-069929
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-069929: (2.307018171s)
--- PASS: TestForceSystemdEnv (32.51s)

                                                
                                    
TestErrorSpam/setup (30.29s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-201812 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-201812 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-201812 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-201812 --driver=docker  --container-runtime=crio: (30.293095817s)
--- PASS: TestErrorSpam/setup (30.29s)

                                                
                                    
TestErrorSpam/start (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
TestErrorSpam/pause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 pause
--- PASS: TestErrorSpam/pause (1.79s)

                                                
                                    
TestErrorSpam/unpause (1.83s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

                                                
                                    
TestErrorSpam/stop (1.44s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 stop: (1.245510292s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-201812 --log_dir /tmp/nospam-201812 stop
--- PASS: TestErrorSpam/stop (1.44s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19640-868698/.minikube/files/etc/test/nested/copy/874079/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.8s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-963815 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-963815 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (49.800595827s)
--- PASS: TestFunctional/serial/StartWithProxy (49.80s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (26.84s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-963815 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-963815 --alsologtostderr -v=8: (26.842354117s)
functional_test.go:663: soft start took 26.842983927s for "functional-963815" cluster.
--- PASS: TestFunctional/serial/SoftStart (26.84s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-963815 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-963815 cache add registry.k8s.io/pause:3.1: (1.431426515s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-963815 cache add registry.k8s.io/pause:3.3: (1.487092341s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-963815 cache add registry.k8s.io/pause:latest: (1.398639606s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-963815 /tmp/TestFunctionalserialCacheCmdcacheadd_local3708056274/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 cache add minikube-local-cache-test:functional-963815
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 cache delete minikube-local-cache-test:functional-963815
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-963815
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-963815 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (284.881888ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-963815 cache reload: (1.032807749s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 kubectl -- --context functional-963815 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-963815 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.28s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-963815 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-963815 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.27640055s)
functional_test.go:761: restart took 37.276503826s for "functional-963815" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.28s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-963815 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-963815 logs: (1.721763721s)
--- PASS: TestFunctional/serial/LogsCmd (1.72s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 logs --file /tmp/TestFunctionalserialLogsFileCmd3161164964/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-963815 logs --file /tmp/TestFunctionalserialLogsFileCmd3161164964/001/logs.txt: (1.76373698s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

                                                
                                    
TestFunctional/serial/InvalidService (4.23s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-963815 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-963815
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-963815: exit status 115 (628.02998ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32070 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-963815 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.23s)
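testdata/invalidsvc.yaml is not shown in this report. The SVC_UNREACHABLE message above ("no running pod for service invalid-svc found") implies a NodePort service whose selector matches no running pod, roughly as sketched below; the selector and target port are assumptions, and only the service name, namespace, and port 80 appear in the log:

  apiVersion: v1
  kind: Service
  metadata:
    name: invalid-svc
    namespace: default
  spec:
    type: NodePort
    selector:
      app: no-such-pod                              # assumed: matches nothing, so the service has no endpoints
    ports:
    - port: 80
      targetPort: 80

minikube service still resolves a node URL for it (http://192.168.49.2:32070 above) but exits with status 115 because no endpoint backs the service.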

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-963815 config get cpus: exit status 14 (104.345465ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-963815 config get cpus: exit status 14 (94.598023ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-963815 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-963815 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 901751: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.15s)

                                                
                                    
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-963815 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-963815 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (187.125678ms)

                                                
                                                
-- stdout --
	* [functional-963815] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:57:58.864039  901451 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:57:58.864243  901451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:57:58.864254  901451 out.go:358] Setting ErrFile to fd 2...
	I0914 00:57:58.864260  901451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:57:58.864620  901451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 00:57:58.865396  901451 out.go:352] Setting JSON to false
	I0914 00:57:58.866578  901451 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":16823,"bootTime":1726258656,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0914 00:57:58.866664  901451 start.go:139] virtualization:  
	I0914 00:57:58.869773  901451 out.go:177] * [functional-963815] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 00:57:58.873384  901451 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:57:58.873502  901451 notify.go:220] Checking for updates...
	I0914 00:57:58.878436  901451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:57:58.881007  901451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 00:57:58.883572  901451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	I0914 00:57:58.886174  901451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 00:57:58.888885  901451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:57:58.891999  901451 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:57:58.892546  901451 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:57:58.924678  901451 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:57:58.924811  901451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:57:58.984278  901451 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 00:57:58.973808953 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:57:58.984394  901451 docker.go:318] overlay module found
	I0914 00:57:58.987426  901451 out.go:177] * Using the docker driver based on existing profile
	I0914 00:57:58.989975  901451 start.go:297] selected driver: docker
	I0914 00:57:58.989993  901451 start.go:901] validating driver "docker" against &{Name:functional-963815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-963815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:57:58.990116  901451 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:57:58.993352  901451 out.go:201] 
	W0914 00:57:58.995994  901451 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 00:57:58.998649  901451 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-963815 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
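
The two dry-run invocations above exercise minikube's memory validation without touching the running cluster. A minimal sketch of the same check, using only flags that appear in this log:
	# 250MB is below the usable minimum, so this exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
	out/minikube-linux-arm64 start -p functional-963815 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	# without the memory override the same dry-run validates cleanly against the existing profile
	out/minikube-linux-arm64 start -p functional-963815 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio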

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-963815 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-963815 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (202.636896ms)

                                                
                                                
-- stdout --
	* [functional-963815] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 00:57:58.672452  901408 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:57:58.672655  901408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:57:58.672683  901408 out.go:358] Setting ErrFile to fd 2...
	I0914 00:57:58.672704  901408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:57:58.673147  901408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 00:57:58.673604  901408 out.go:352] Setting JSON to false
	I0914 00:57:58.674663  901408 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":16823,"bootTime":1726258656,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0914 00:57:58.674756  901408 start.go:139] virtualization:  
	I0914 00:57:58.678294  901408 out.go:177] * [functional-963815] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0914 00:57:58.681139  901408 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:57:58.681195  901408 notify.go:220] Checking for updates...
	I0914 00:57:58.685804  901408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:57:58.688309  901408 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 00:57:58.690723  901408 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	I0914 00:57:58.693032  901408 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 00:57:58.695467  901408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:57:58.698436  901408 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 00:57:58.699137  901408 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:57:58.735365  901408 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:57:58.735524  901408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:57:58.798039  901408 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 00:57:58.787354308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:57:58.798169  901408 docker.go:318] overlay module found
	I0914 00:57:58.802563  901408 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0914 00:57:58.805246  901408 start.go:297] selected driver: docker
	I0914 00:57:58.805334  901408 start.go:901] validating driver "docker" against &{Name:functional-963815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-963815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:57:58.805443  901408 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:57:58.808645  901408 out.go:201] 
	W0914 00:57:58.811263  901408 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 00:57:58.813893  901408 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
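
The French output above is the same dry-run failure rendered under a French locale. A sketch of a manual reproduction, assuming minikube picks its translation from the standard locale environment variables (the exact variable the harness sets is not visible in this log):
	# the RSRC_INSUFFICIENT_REQ_MEMORY message should be printed in French
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-963815 --dry-run --memory 250MB --driver=docker --container-runtime=crio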

                                                
                                    
TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-963815 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-963815 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-xkb69" [de15eaa1-9618-4b9d-81a0-23f149fc4f05] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-xkb69" [de15eaa1-9618-4b9d-81a0-23f149fc4f05] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.00396285s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32511
functional_test.go:1675: http://192.168.49.2:32511: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-xkb69

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32511
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.58s)
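
The flow above is plain kubectl plus minikube's service URL helper. A minimal sketch of the same sequence by hand (the final curl probe is an illustration, not part of the harness):
	kubectl --context functional-963815 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-963815 expose deployment hello-node-connect --type=NodePort --port=8080
	# wait for the pod to be Running, then resolve the NodePort URL and probe it
	out/minikube-linux-arm64 -p functional-963815 service hello-node-connect --url
	curl -s http://192.168.49.2:32511/    # URL taken from this run; yours will differ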

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [63209c18-99ce-41f8-9c70-2e2603f9b12c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003469748s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-963815 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-963815 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-963815 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-963815 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7ed3b782-0569-4570-9488-48afef4c4c73] Pending
helpers_test.go:344: "sp-pod" [7ed3b782-0569-4570-9488-48afef4c4c73] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7ed3b782-0569-4570-9488-48afef4c4c73] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003111154s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-963815 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-963815 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-963815 delete -f testdata/storage-provisioner/pod.yaml: (1.150027505s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-963815 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [37f653f1-f6ba-459e-a65f-a3c20e1de6b4] Pending
helpers_test.go:344: "sp-pod" [37f653f1-f6ba-459e-a65f-a3c20e1de6b4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003621511s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-963815 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.13s)
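
The PVC check boils down to writing a file through one pod, deleting that pod, and reading the file back through a replacement pod bound to the same claim. A minimal sketch, assuming the pvc.yaml/pod.yaml manifests from minikube's testdata/storage-provisioner directory:
	kubectl --context functional-963815 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-963815 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-963815 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-963815 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-963815 apply -f testdata/storage-provisioner/pod.yaml
	# the file written by the first pod should still be visible from the replacement pod
	kubectl --context functional-963815 exec sp-pod -- ls /tmp/mount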

                                                
                                    
TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh -n functional-963815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 cp functional-963815:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1458368501/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh -n functional-963815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh -n functional-963815 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.28s)
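
For reference, the copy round-trip above can be repeated by hand with the same paths (the /tmp destination below is illustrative):
	# host -> node, then read it back over ssh
	out/minikube-linux-arm64 -p functional-963815 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-963815 ssh -n functional-963815 "sudo cat /home/docker/cp-test.txt"
	# node -> host
	out/minikube-linux-arm64 -p functional-963815 cp functional-963815:/home/docker/cp-test.txt /tmp/cp-test.txt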

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/874079/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo cat /etc/test/nested/copy/874079/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/874079.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo cat /etc/ssl/certs/874079.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/874079.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo cat /usr/share/ca-certificates/874079.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/8740792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo cat /etc/ssl/certs/8740792.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/8740792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo cat /usr/share/ca-certificates/8740792.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.07s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-963815 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-963815 ssh "sudo systemctl is-active docker": exit status 1 (469.596954ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-963815 ssh "sudo systemctl is-active containerd": exit status 1 (313.775658ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
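
With cri-o as the selected runtime, the other container runtimes should be inactive; the check is just systemctl over minikube ssh. A minimal sketch:
	# both commands print "inactive" and exit non-zero, which is the expected outcome on a cri-o cluster
	out/minikube-linux-arm64 -p functional-963815 ssh "sudo systemctl is-active docker"
	out/minikube-linux-arm64 -p functional-963815 ssh "sudo systemctl is-active containerd"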

                                                
                                    
TestFunctional/parallel/License (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.77s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-963815 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-963815 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-963815 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 899369: os: process already finished
helpers_test.go:502: unable to terminate pid 899188: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-963815 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-963815 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-963815 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1c00db94-3c2e-4e1f-8dd8-368709d532c2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1c00db94-3c2e-4e1f-8dd8-368709d532c2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004209279s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-963815 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.104.99 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-963815 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
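
The tunnel series above can be reproduced with two terminals: one keeps the tunnel open, the other waits for a LoadBalancer ingress IP and probes it. A minimal sketch (the curl probe is an illustration, not part of the harness):
	# terminal 1: keep the tunnel/route alive
	out/minikube-linux-arm64 -p functional-963815 tunnel --alsologtostderr
	# terminal 2: deploy the test service, read the assigned ingress IP, and probe it
	kubectl --context functional-963815 apply -f testdata/testsvc.yaml
	kubectl --context functional-963815 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -s http://10.96.104.99/    # IP taken from this run; yours will differ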

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-963815 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-963815 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-2phzj" [b7dcc85a-3d88-40ea-bf5e-e0c84e7cd34b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-2phzj" [b7dcc85a-3d88-40ea-bf5e-e0c84e7cd34b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.008000315s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "311.092764ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "61.212289ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "320.925424ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "63.272949ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdany-port451136363/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726275474471025605" to /tmp/TestFunctionalparallelMountCmdany-port451136363/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726275474471025605" to /tmp/TestFunctionalparallelMountCmdany-port451136363/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726275474471025605" to /tmp/TestFunctionalparallelMountCmdany-port451136363/001/test-1726275474471025605
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.219183ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 00:57 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 00:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 00:57 test-1726275474471025605
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh cat /mount-9p/test-1726275474471025605
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-963815 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2c4b649f-61bb-438a-8cf4-0e8c22b4b5ee] Pending
helpers_test.go:344: "busybox-mount" [2c4b649f-61bb-438a-8cf4-0e8c22b4b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2c4b649f-61bb-438a-8cf4-0e8c22b4b5ee] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2c4b649f-61bb-438a-8cf4-0e8c22b4b5ee] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005735219s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-963815 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdany-port451136363/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.37s)
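
The 9p mount check maps a host directory into the node and verifies it from both the node and a pod. A minimal sketch, assuming any writable host directory (/tmp/mount-demo below is illustrative):
	# background the mount, then confirm it is a 9p mount and list what the guest sees
	out/minikube-linux-arm64 mount -p functional-963815 /tmp/mount-demo:/mount-9p &
	out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-963815 ssh -- ls -la /mount-9p
	# clean up
	out/minikube-linux-arm64 -p functional-963815 ssh "sudo umount -f /mount-9p"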

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 service list -o json
functional_test.go:1494: Took "609.225257ms" to run "out/minikube-linux-arm64 -p functional-963815 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31362
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31362
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
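
The ServiceCmd sub-tests above are variations of the same lookup; a minimal sketch of the commands they exercise:
	out/minikube-linux-arm64 -p functional-963815 service list
	out/minikube-linux-arm64 -p functional-963815 service list -o json
	out/minikube-linux-arm64 -p functional-963815 service --namespace=default --https --url hello-node
	out/minikube-linux-arm64 -p functional-963815 service hello-node --url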

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdspecific-port115385882/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (550.386258ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdspecific-port115385882/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-963815 ssh "sudo umount -f /mount-9p": exit status 1 (303.680244ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-963815 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdspecific-port115385882/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
2024/09/14 00:58:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3755878915/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3755878915/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3755878915/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T" /mount1: exit status 1 (900.349131ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-963815 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3755878915/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3755878915/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-963815 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3755878915/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)
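
VerifyCleanup checks that a single --kill invocation tears down every outstanding mount helper for the profile. A minimal sketch with three mounts of the same host directory (path is illustrative):
	out/minikube-linux-arm64 mount -p functional-963815 /tmp/mount-demo:/mount1 &
	out/minikube-linux-arm64 mount -p functional-963815 /tmp/mount-demo:/mount2 &
	out/minikube-linux-arm64 mount -p functional-963815 /tmp/mount-demo:/mount3 &
	# one kill cleans up all mount processes started for this profile
	out/minikube-linux-arm64 mount -p functional-963815 --kill=true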

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-963815 version -o=json --components: (1.086698391s)
--- PASS: TestFunctional/parallel/Version/components (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-963815 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-963815
localhost/kicbase/echo-server:functional-963815
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-963815 image ls --format short --alsologtostderr:
I0914 00:58:16.247621  904287 out.go:345] Setting OutFile to fd 1 ...
I0914 00:58:16.247893  904287 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:58:16.247925  904287 out.go:358] Setting ErrFile to fd 2...
I0914 00:58:16.247946  904287 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:58:16.248323  904287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
I0914 00:58:16.249087  904287 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 00:58:16.249275  904287 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 00:58:16.249875  904287 cli_runner.go:164] Run: docker container inspect functional-963815 --format={{.State.Status}}
I0914 00:58:16.269733  904287 ssh_runner.go:195] Run: systemctl --version
I0914 00:58:16.269780  904287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-963815
I0914 00:58:16.306808  904287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/functional-963815/id_rsa Username:docker}
I0914 00:58:16.398262  904287 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-963815 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| docker.io/library/nginx                 | latest             | 195245f0c7927 | 197MB  |
| localhost/kicbase/echo-server           | functional-963815  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | alpine             | b887aca7aed61 | 48.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/minikube-local-cache-test     | functional-963815  | ba05c6165ebcd | 3.33kB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-963815 image ls --format table --alsologtostderr:
I0914 00:58:16.540842  904356 out.go:345] Setting OutFile to fd 1 ...
I0914 00:58:16.541042  904356 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:58:16.541072  904356 out.go:358] Setting ErrFile to fd 2...
I0914 00:58:16.541093  904356 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:58:16.541355  904356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
I0914 00:58:16.542017  904356 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 00:58:16.542209  904356 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 00:58:16.544208  904356 cli_runner.go:164] Run: docker container inspect functional-963815 --format={{.State.Status}}
I0914 00:58:16.567916  904356 ssh_runner.go:195] Run: systemctl --version
I0914 00:58:16.567968  904356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-963815
I0914 00:58:16.595564  904356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/functional-963815/id_rsa Username:docker}
I0914 00:58:16.686824  904356 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-963815 image ls --format json --alsologtostderr:
[{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172029"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4",
"repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":
"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8
199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{
"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-963815"],"size":"4788229"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["regis
try.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48375489"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba05c6165ebcd490b7e0e251b965fff00d0882dd90a317fe65c0cb5c46be2945","repoDigests":["localhost/minikube-local-cache-test@sha256:4f75041616ea7a5c8c8ea77bd2
a29aab5c739f193cd60f43ec99a35b40b0c3dc"],"repoTags":["localhost/minikube-local-cache-test:functional-963815"],"size":"3330"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-963815 image ls --format json --alsologtostderr:
I0914 00:58:16.512162  904351 out.go:345] Setting OutFile to fd 1 ...
I0914 00:58:16.512375  904351 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:58:16.512386  904351 out.go:358] Setting ErrFile to fd 2...
I0914 00:58:16.512392  904351 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:58:16.512690  904351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
I0914 00:58:16.513428  904351 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 00:58:16.513617  904351 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 00:58:16.514141  904351 cli_runner.go:164] Run: docker container inspect functional-963815 --format={{.State.Status}}
I0914 00:58:16.545656  904351 ssh_runner.go:195] Run: systemctl --version
I0914 00:58:16.545712  904351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-963815
I0914 00:58:16.584323  904351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/functional-963815/id_rsa Username:docker}
I0914 00:58:16.670249  904351 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
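The image ls --format json stdout above is a flat JSON array in which each entry carries id, repoDigests, repoTags and size (the size is a byte count encoded as a string). A minimal Go sketch of decoding that output, assuming the same binary path and profile name as in the log; the imageEntry struct below is hypothetical and only mirrors the keys visible in the stdout, it is not part of functional_test.go:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageEntry mirrors the keys visible in the stdout above; hypothetical, not a minikube type.
	type imageEntry struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // reported as a byte count encoded as a string
	}

	func main() {
		// Binary path and profile name are taken from the log above; adjust as needed.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-963815",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []imageEntry
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			// Print the short id plus any tags, e.g. "2f6c962e7b83 [registry.k8s.io/coredns/coredns:v1.11.3]".
			fmt.Println(img.ID[:12], img.RepoTags)
		}
	}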

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-963815 image ls --format yaml --alsologtostderr:
- id: ba05c6165ebcd490b7e0e251b965fff00d0882dd90a317fe65c0cb5c46be2945
repoDigests:
- localhost/minikube-local-cache-test@sha256:4f75041616ea7a5c8c8ea77bd2a29aab5c739f193cd60f43ec99a35b40b0c3dc
repoTags:
- localhost/minikube-local-cache-test:functional-963815
size: "3330"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "48375489"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-963815
size: "4788229"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171
repoTags:
- docker.io/library/nginx:latest
size: "197172029"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-963815 image ls --format yaml --alsologtostderr:
I0914 00:58:16.249573  904288 out.go:345] Setting OutFile to fd 1 ...
I0914 00:58:16.249734  904288 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:58:16.249746  904288 out.go:358] Setting ErrFile to fd 2...
I0914 00:58:16.249752  904288 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:58:16.250058  904288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
I0914 00:58:16.250712  904288 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 00:58:16.250863  904288 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 00:58:16.251376  904288 cli_runner.go:164] Run: docker container inspect functional-963815 --format={{.State.Status}}
I0914 00:58:16.269728  904288 ssh_runner.go:195] Run: systemctl --version
I0914 00:58:16.269780  904288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-963815
I0914 00:58:16.297384  904288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/functional-963815/id_rsa Username:docker}
I0914 00:58:16.381789  904288 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-963815 ssh pgrep buildkitd: exit status 1 (256.991951ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image build -t localhost/my-image:functional-963815 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-963815 image build -t localhost/my-image:functional-963815 testdata/build --alsologtostderr: (2.912840838s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-963815 image build -t localhost/my-image:functional-963815 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 425ee493c7c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-963815
--> fbe146bcc17
Successfully tagged localhost/my-image:functional-963815
fbe146bcc1775d0f8867a3bcec53da0740bb44ada33289bda3656635d52d72e1
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-963815 image build -t localhost/my-image:functional-963815 testdata/build --alsologtostderr:
I0914 00:58:17.031992  904473 out.go:345] Setting OutFile to fd 1 ...
I0914 00:58:17.032620  904473 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:58:17.032636  904473 out.go:358] Setting ErrFile to fd 2...
I0914 00:58:17.032642  904473 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:58:17.032915  904473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
I0914 00:58:17.033623  904473 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 00:58:17.034254  904473 config.go:182] Loaded profile config "functional-963815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0914 00:58:17.034765  904473 cli_runner.go:164] Run: docker container inspect functional-963815 --format={{.State.Status}}
I0914 00:58:17.052382  904473 ssh_runner.go:195] Run: systemctl --version
I0914 00:58:17.052451  904473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-963815
I0914 00:58:17.068924  904473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/functional-963815/id_rsa Username:docker}
I0914 00:58:17.158181  904473 build_images.go:161] Building image from path: /tmp/build.4131827079.tar
I0914 00:58:17.158258  904473 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 00:58:17.168420  904473 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4131827079.tar
I0914 00:58:17.171956  904473 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4131827079.tar: stat -c "%s %y" /var/lib/minikube/build/build.4131827079.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4131827079.tar': No such file or directory
I0914 00:58:17.171988  904473 ssh_runner.go:362] scp /tmp/build.4131827079.tar --> /var/lib/minikube/build/build.4131827079.tar (3072 bytes)
I0914 00:58:17.196695  904473 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4131827079
I0914 00:58:17.207514  904473 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4131827079 -xf /var/lib/minikube/build/build.4131827079.tar
I0914 00:58:17.216505  904473 crio.go:315] Building image: /var/lib/minikube/build/build.4131827079
I0914 00:58:17.216600  904473 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-963815 /var/lib/minikube/build/build.4131827079 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0914 00:58:19.873185  904473 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-963815 /var/lib/minikube/build/build.4131827079 --cgroup-manager=cgroupfs: (2.656557736s)
I0914 00:58:19.873281  904473 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4131827079
I0914 00:58:19.882298  904473 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4131827079.tar
I0914 00:58:19.890949  904473 build_images.go:217] Built localhost/my-image:functional-963815 from /tmp/build.4131827079.tar
I0914 00:58:19.890980  904473 build_images.go:133] succeeded building to: functional-963815
I0914 00:58:19.890986  904473 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.40s)
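From the STEP lines above, the testdata/build context evidently contains a three-instruction Containerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). The test drives the build through the minikube CLI and then re-lists images; a minimal Go sketch of that same two-step flow using only os/exec (paths, profile and tag are taken from the log; this is not the actual functional_test.go helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		minikube := "out/minikube-linux-arm64"
		profile := "functional-963815"
		tag := "localhost/my-image:functional-963815"

		// Step 1: build from the testdata/build context (compare STEP 1/3..3/3 above).
		build := exec.Command(minikube, "-p", profile, "image", "build", "-t", tag, "testdata/build")
		if out, err := build.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("build failed: %v\n%s", err, out))
		}

		// Step 2: list images and verify the new tag is present, as the test does afterwards.
		out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
		if err != nil {
			panic(err)
		}
		if !strings.Contains(string(out), tag) {
			panic("built image not found in image ls output")
		}
		fmt.Println("image built and listed:", tag)
	}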

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-963815
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image load --daemon kicbase/echo-server:functional-963815 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-963815 image load --daemon kicbase/echo-server:functional-963815 --alsologtostderr: (1.408744086s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image load --daemon kicbase/echo-server:functional-963815 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-963815
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image load --daemon kicbase/echo-server:functional-963815 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image save kicbase/echo-server:functional-963815 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image rm kicbase/echo-server:functional-963815 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-963815
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-963815 image save --daemon kicbase/echo-server:functional-963815 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-963815
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-963815
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-963815
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-963815
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (173.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-401927 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0914 00:59:02.896626  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:02.903611  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:02.914974  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:02.937075  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:02.979149  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:03.060413  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:03.221826  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:03.543441  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:04.185411  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:05.467575  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:08.029758  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:13.151776  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:23.393872  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:59:43.875478  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:00:24.837444  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-401927 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m52.860564554s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (173.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (8.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-401927 -- rollout status deployment/busybox: (5.773589965s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-g4mrh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-kp5pc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-t72d5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-g4mrh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-kp5pc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-t72d5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-g4mrh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-kp5pc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-t72d5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.77s)
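The DeployApp block above follows a simple pattern: apply the busybox deployment, wait for the rollout, collect the pod names with a jsonpath query, then run nslookup against three names from inside each pod. A minimal Go sketch of that probe loop against the same context (a hypothetical helper, not the ha_test.go implementation; it shells out to kubectl with --context the way the later NodeLabels step does):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// kubectl wraps every call with the cluster context seen in the log.
		kubectl := func(args ...string) ([]byte, error) {
			return exec.Command("kubectl", append([]string{"--context", "ha-401927"}, args...)...).Output()
		}

		// Collect the pod names, as in the jsonpath query above.
		out, err := kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
		if err != nil {
			panic(err)
		}
		targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range strings.Fields(string(out)) {
			for _, target := range targets {
				// Probe DNS from inside the pod; a non-zero exit fails the check.
				if _, err := kubectl("exec", pod, "--", "nslookup", target); err != nil {
					panic(fmt.Sprintf("nslookup %s from %s failed: %v", target, pod, err))
				}
				fmt.Printf("%s resolved %s\n", pod, target)
			}
		}
	}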

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-g4mrh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-g4mrh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-kp5pc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-kp5pc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-t72d5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401927 -- exec busybox-7dff88458-t72d5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (35.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-401927 -v=7 --alsologtostderr
E0914 01:01:46.759526  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-401927 -v=7 --alsologtostderr: (34.675990883s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-401927 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (18.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp testdata/cp-test.txt ha-401927:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1688928420/001/cp-test_ha-401927.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927:/home/docker/cp-test.txt ha-401927-m02:/home/docker/cp-test_ha-401927_ha-401927-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m02 "sudo cat /home/docker/cp-test_ha-401927_ha-401927-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927:/home/docker/cp-test.txt ha-401927-m03:/home/docker/cp-test_ha-401927_ha-401927-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m03 "sudo cat /home/docker/cp-test_ha-401927_ha-401927-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927:/home/docker/cp-test.txt ha-401927-m04:/home/docker/cp-test_ha-401927_ha-401927-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m04 "sudo cat /home/docker/cp-test_ha-401927_ha-401927-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp testdata/cp-test.txt ha-401927-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1688928420/001/cp-test_ha-401927-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m02:/home/docker/cp-test.txt ha-401927:/home/docker/cp-test_ha-401927-m02_ha-401927.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927 "sudo cat /home/docker/cp-test_ha-401927-m02_ha-401927.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m02:/home/docker/cp-test.txt ha-401927-m03:/home/docker/cp-test_ha-401927-m02_ha-401927-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m03 "sudo cat /home/docker/cp-test_ha-401927-m02_ha-401927-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m02:/home/docker/cp-test.txt ha-401927-m04:/home/docker/cp-test_ha-401927-m02_ha-401927-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m04 "sudo cat /home/docker/cp-test_ha-401927-m02_ha-401927-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp testdata/cp-test.txt ha-401927-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1688928420/001/cp-test_ha-401927-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m03:/home/docker/cp-test.txt ha-401927:/home/docker/cp-test_ha-401927-m03_ha-401927.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927 "sudo cat /home/docker/cp-test_ha-401927-m03_ha-401927.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m03:/home/docker/cp-test.txt ha-401927-m02:/home/docker/cp-test_ha-401927-m03_ha-401927-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m02 "sudo cat /home/docker/cp-test_ha-401927-m03_ha-401927-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m03:/home/docker/cp-test.txt ha-401927-m04:/home/docker/cp-test_ha-401927-m03_ha-401927-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m04 "sudo cat /home/docker/cp-test_ha-401927-m03_ha-401927-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp testdata/cp-test.txt ha-401927-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1688928420/001/cp-test_ha-401927-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m04:/home/docker/cp-test.txt ha-401927:/home/docker/cp-test_ha-401927-m04_ha-401927.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927 "sudo cat /home/docker/cp-test_ha-401927-m04_ha-401927.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m04:/home/docker/cp-test.txt ha-401927-m02:/home/docker/cp-test_ha-401927-m04_ha-401927-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m02 "sudo cat /home/docker/cp-test_ha-401927-m04_ha-401927-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 cp ha-401927-m04:/home/docker/cp-test.txt ha-401927-m03:/home/docker/cp-test_ha-401927-m04_ha-401927-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927-m03 "sudo cat /home/docker/cp-test_ha-401927-m04_ha-401927-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.44s)
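
Every cp-test step above follows the same pattern: stage a file with minikube cp, then confirm it landed by cat-ing it back over ssh -n on the target node. A minimal sketch of that loop, using only the profile name, paths, and flags shown in the log (the loop itself is illustrative, not the test's actual Go code):

  # Illustrative only: fan the test file out to each node and read it back.
  for node in ha-401927 ha-401927-m02 ha-401927-m03 ha-401927-m04; do
    out/minikube-linux-arm64 -p ha-401927 cp testdata/cp-test.txt "$node":/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-401927 ssh -n "$node" "sudo cat /home/docker/cp-test.txt"
  done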

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 node stop m02 -v=7 --alsologtostderr
E0914 01:02:26.630238  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:26.636637  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:26.648004  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:26.669361  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:26.710716  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:26.792336  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:26.953983  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:27.275655  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:27.917722  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:29.199512  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:31.761491  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-401927 node stop m02 -v=7 --alsologtostderr: (12.087395119s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr: exit status 7 (734.74529ms)

                                                
                                                
-- stdout --
	ha-401927
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-401927-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401927-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-401927-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 01:02:34.092212  920143 out.go:345] Setting OutFile to fd 1 ...
	I0914 01:02:34.092449  920143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:02:34.092718  920143 out.go:358] Setting ErrFile to fd 2...
	I0914 01:02:34.092755  920143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:02:34.093047  920143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 01:02:34.093288  920143 out.go:352] Setting JSON to false
	I0914 01:02:34.093343  920143 mustload.go:65] Loading cluster: ha-401927
	I0914 01:02:34.093871  920143 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:02:34.093916  920143 status.go:255] checking status of ha-401927 ...
	I0914 01:02:34.094481  920143 cli_runner.go:164] Run: docker container inspect ha-401927 --format={{.State.Status}}
	I0914 01:02:34.097360  920143 notify.go:220] Checking for updates...
	I0914 01:02:34.116036  920143 status.go:330] ha-401927 host status = "Running" (err=<nil>)
	I0914 01:02:34.116061  920143 host.go:66] Checking if "ha-401927" exists ...
	I0914 01:02:34.116368  920143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927
	I0914 01:02:34.142756  920143 host.go:66] Checking if "ha-401927" exists ...
	I0914 01:02:34.143094  920143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 01:02:34.143139  920143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927
	I0914 01:02:34.166086  920143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33579 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927/id_rsa Username:docker}
	I0914 01:02:34.255043  920143 ssh_runner.go:195] Run: systemctl --version
	I0914 01:02:34.259807  920143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:02:34.272004  920143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 01:02:34.330925  920143 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-14 01:02:34.31972814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 01:02:34.331577  920143 kubeconfig.go:125] found "ha-401927" server: "https://192.168.49.254:8443"
	I0914 01:02:34.331610  920143 api_server.go:166] Checking apiserver status ...
	I0914 01:02:34.331668  920143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:34.345543  920143 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup
	I0914 01:02:34.355800  920143 api_server.go:182] apiserver freezer: "9:freezer:/docker/a653e1e66e311cc3f4d67545172636f17a5f1830f207f407847e9b3124791583/crio/crio-1d70f2b34bea19c6d160bfc2235f8c574fa34be660155209f0c2fce80cff98b6"
	I0914 01:02:34.355869  920143 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a653e1e66e311cc3f4d67545172636f17a5f1830f207f407847e9b3124791583/crio/crio-1d70f2b34bea19c6d160bfc2235f8c574fa34be660155209f0c2fce80cff98b6/freezer.state
	I0914 01:02:34.369023  920143 api_server.go:204] freezer state: "THAWED"
	I0914 01:02:34.369057  920143 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0914 01:02:34.390979  920143 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0914 01:02:34.391011  920143 status.go:422] ha-401927 apiserver status = Running (err=<nil>)
	I0914 01:02:34.391024  920143 status.go:257] ha-401927 status: &{Name:ha-401927 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 01:02:34.391059  920143 status.go:255] checking status of ha-401927-m02 ...
	I0914 01:02:34.391388  920143 cli_runner.go:164] Run: docker container inspect ha-401927-m02 --format={{.State.Status}}
	I0914 01:02:34.408375  920143 status.go:330] ha-401927-m02 host status = "Stopped" (err=<nil>)
	I0914 01:02:34.408403  920143 status.go:343] host is not running, skipping remaining checks
	I0914 01:02:34.408410  920143 status.go:257] ha-401927-m02 status: &{Name:ha-401927-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 01:02:34.408444  920143 status.go:255] checking status of ha-401927-m03 ...
	I0914 01:02:34.408805  920143 cli_runner.go:164] Run: docker container inspect ha-401927-m03 --format={{.State.Status}}
	I0914 01:02:34.426343  920143 status.go:330] ha-401927-m03 host status = "Running" (err=<nil>)
	I0914 01:02:34.426370  920143 host.go:66] Checking if "ha-401927-m03" exists ...
	I0914 01:02:34.426711  920143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927-m03
	I0914 01:02:34.443576  920143 host.go:66] Checking if "ha-401927-m03" exists ...
	I0914 01:02:34.443911  920143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 01:02:34.443964  920143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m03
	I0914 01:02:34.460796  920143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33589 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m03/id_rsa Username:docker}
	I0914 01:02:34.547301  920143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:02:34.558977  920143 kubeconfig.go:125] found "ha-401927" server: "https://192.168.49.254:8443"
	I0914 01:02:34.559007  920143 api_server.go:166] Checking apiserver status ...
	I0914 01:02:34.559058  920143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:02:34.570109  920143 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1353/cgroup
	I0914 01:02:34.579250  920143 api_server.go:182] apiserver freezer: "9:freezer:/docker/6834abc6997ba378aa6888ab9a6f6cf97334e7d06ff6a8ec9a09916dcf77d85e/crio/crio-041bd593ed40406cf8dffd36034993efb11a4b93ad745a9d7415745761073751"
	I0914 01:02:34.579328  920143 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6834abc6997ba378aa6888ab9a6f6cf97334e7d06ff6a8ec9a09916dcf77d85e/crio/crio-041bd593ed40406cf8dffd36034993efb11a4b93ad745a9d7415745761073751/freezer.state
	I0914 01:02:34.588158  920143 api_server.go:204] freezer state: "THAWED"
	I0914 01:02:34.588194  920143 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0914 01:02:34.596034  920143 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0914 01:02:34.596065  920143 status.go:422] ha-401927-m03 apiserver status = Running (err=<nil>)
	I0914 01:02:34.596075  920143 status.go:257] ha-401927-m03 status: &{Name:ha-401927-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 01:02:34.596109  920143 status.go:255] checking status of ha-401927-m04 ...
	I0914 01:02:34.596425  920143 cli_runner.go:164] Run: docker container inspect ha-401927-m04 --format={{.State.Status}}
	I0914 01:02:34.612301  920143 status.go:330] ha-401927-m04 host status = "Running" (err=<nil>)
	I0914 01:02:34.612323  920143 host.go:66] Checking if "ha-401927-m04" exists ...
	I0914 01:02:34.612621  920143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401927-m04
	I0914 01:02:34.630323  920143 host.go:66] Checking if "ha-401927-m04" exists ...
	I0914 01:02:34.630640  920143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 01:02:34.630697  920143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401927-m04
	I0914 01:02:34.649405  920143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/ha-401927-m04/id_rsa Username:docker}
	I0914 01:02:34.746560  920143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:02:34.761736  920143 status.go:257] ha-401927-m04 status: &{Name:ha-401927-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.82s)
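
The status trace above shows how the apiserver check works on each running control-plane node: find the kube-apiserver process, read its freezer cgroup to confirm it is THAWED, then probe /healthz on the shared endpoint 192.168.49.254:8443. A rough manual equivalent, assuming it is run on the host that carries this cluster and that anonymous access to /healthz is allowed (as it was in this run); the variable name is mine, not part of the test:

  # Find the apiserver PID on the primary node and inspect its freezer cgroup, as status does.
  APISERVER_PID=$(out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927 "sudo pgrep -xnf kube-apiserver.*minikube.*")
  out/minikube-linux-arm64 -p ha-401927 ssh -n ha-401927 "sudo egrep ^[0-9]+:freezer: /proc/${APISERVER_PID}/cgroup"
  # Probe the load-balanced apiserver endpoint the trace checks last (assumes /healthz permits anonymous GETs).
  curl -k https://192.168.49.254:8443/healthz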

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (32.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 node start m02 -v=7 --alsologtostderr
E0914 01:02:36.882838  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:02:47.125090  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-401927 node start m02 -v=7 --alsologtostderr: (30.950630895s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr: (1.222524964s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
E0914 01:03:07.606910  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (6.429872981s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (205.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-401927 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-401927 -v=7 --alsologtostderr
E0914 01:03:48.568302  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-401927 -v=7 --alsologtostderr: (37.148394876s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-401927 --wait=true -v=7 --alsologtostderr
E0914 01:04:02.897406  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:04:30.601992  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:05:10.490119  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-401927 --wait=true -v=7 --alsologtostderr: (2m47.949617982s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-401927
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (205.23s)
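
The restart test above asserts that a full stop followed by start --wait=true brings back the same node set. A rough stand-alone version of that assertion, built only from the commands shown in the log (the /tmp file names and the diff step are my own illustration, not the test's code):

  # Record the node inventory, bounce the whole profile, then diff the inventory afterwards.
  out/minikube-linux-arm64 node list -p ha-401927 > /tmp/nodes-before.txt
  out/minikube-linux-arm64 stop -p ha-401927
  out/minikube-linux-arm64 start -p ha-401927 --wait=true
  out/minikube-linux-arm64 node list -p ha-401927 > /tmp/nodes-after.txt
  diff /tmp/nodes-before.txt /tmp/nodes-after.txt && echo "node set preserved"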

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-401927 node delete m03 -v=7 --alsologtostderr: (11.472124524s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.44s)
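
After deleting m03, the test re-checks readiness with the go-template query shown above. The same two steps, with slightly simpler shell quoting than the harness emits:

  # Remove the third control-plane node, then print the Ready condition of every remaining node.
  out/minikube-linux-arm64 -p ha-401927 node delete m03 --alsologtostderr
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'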

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 stop -v=7 --alsologtostderr
E0914 01:07:26.630725  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-401927 stop -v=7 --alsologtostderr: (35.697254078s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr: exit status 7 (112.776043ms)

                                                
                                                
-- stdout --
	ha-401927
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401927-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401927-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 01:07:28.013046  935054 out.go:345] Setting OutFile to fd 1 ...
	I0914 01:07:28.013243  935054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:07:28.013311  935054 out.go:358] Setting ErrFile to fd 2...
	I0914 01:07:28.013332  935054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:07:28.013595  935054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 01:07:28.013817  935054 out.go:352] Setting JSON to false
	I0914 01:07:28.013881  935054 mustload.go:65] Loading cluster: ha-401927
	I0914 01:07:28.013964  935054 notify.go:220] Checking for updates...
	I0914 01:07:28.014399  935054 config.go:182] Loaded profile config "ha-401927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:07:28.014435  935054 status.go:255] checking status of ha-401927 ...
	I0914 01:07:28.015330  935054 cli_runner.go:164] Run: docker container inspect ha-401927 --format={{.State.Status}}
	I0914 01:07:28.034906  935054 status.go:330] ha-401927 host status = "Stopped" (err=<nil>)
	I0914 01:07:28.034928  935054 status.go:343] host is not running, skipping remaining checks
	I0914 01:07:28.034935  935054 status.go:257] ha-401927 status: &{Name:ha-401927 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 01:07:28.034961  935054 status.go:255] checking status of ha-401927-m02 ...
	I0914 01:07:28.035335  935054 cli_runner.go:164] Run: docker container inspect ha-401927-m02 --format={{.State.Status}}
	I0914 01:07:28.058091  935054 status.go:330] ha-401927-m02 host status = "Stopped" (err=<nil>)
	I0914 01:07:28.058111  935054 status.go:343] host is not running, skipping remaining checks
	I0914 01:07:28.058118  935054 status.go:257] ha-401927-m02 status: &{Name:ha-401927-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 01:07:28.058138  935054 status.go:255] checking status of ha-401927-m04 ...
	I0914 01:07:28.058460  935054 cli_runner.go:164] Run: docker container inspect ha-401927-m04 --format={{.State.Status}}
	I0914 01:07:28.075919  935054 status.go:330] ha-401927-m04 host status = "Stopped" (err=<nil>)
	I0914 01:07:28.075939  935054 status.go:343] host is not running, skipping remaining checks
	I0914 01:07:28.075947  935054 status.go:257] ha-401927-m04 status: &{Name:ha-401927-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.81s)
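
Unlike the stdout above, the useful signal here is the exit code: with every host stopped, status exits 7 instead of 0, which is what the test keys on. A small sketch of scripting against that:

  # status exits non-zero once the hosts are stopped (7 in the run above); capture and act on it.
  out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr
  rc=$?
  echo "status exit code: ${rc}"   # 0 = everything running; this run returned 7 after the stop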

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (69.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-401927 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-401927 --control-plane -v=7 --alsologtostderr: (1m8.385887297s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-401927 status -v=7 --alsologtostderr: (1.002287801s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (69.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

                                                
                                    
x
+
TestJSONOutput/start/Command (78.35s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-426266 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-426266 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m18.342423206s)
--- PASS: TestJSONOutput/start/Command (78.35s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-426266 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-426266 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-426266 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-426266 --output=json --user=testUser: (5.881730635s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-693157 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-693157 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.675327ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a9a7fd39-16bc-4067-95a1-647697b9c414","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-693157] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"13907a48-204d-4ce5-b664-0dbdcfbad0df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"7fc0ffdc-3607-4f69-a749-2dfb2afe5fd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eca27aa9-ba25-4024-bfb0-e71f0a472ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig"}}
	{"specversion":"1.0","id":"51238240-d1cc-43ca-8ae6-79e651ba6a47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube"}}
	{"specversion":"1.0","id":"568116a7-2e94-4d44-9ebf-4127aba4cdba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e6a65f68-0619-42bb-9a66-7c1af8900831","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5abbaf0a-8112-49ec-bdfc-8263a6ba725a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-693157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-693157
--- PASS: TestErrorJSONOutput (0.22s)
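
Each stdout line above is a CloudEvents-style JSON record; the failure itself is the single io.k8s.sigs.minikube.error event carrying exitcode 56 and name DRV_UNSUPPORTED_OS. A sketch of pulling that event out of the stream (jq is assumed to be available; it is not part of the test):

  # Filter the JSON event stream down to error events and print their payloads.
  out/minikube-linux-arm64 start -p json-output-error-693157 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'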

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (37.27s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-639809 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-639809 --network=: (35.157702502s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-639809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-639809
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-639809: (2.089881727s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.27s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (36.26s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-445545 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-445545 --network=bridge: (34.231469651s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-445545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-445545
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-445545: (1.994568291s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.26s)

                                                
                                    
x
+
TestKicExistingNetwork (33.56s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-866998 --network=existing-network
E0914 01:14:02.896490  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-866998 --network=existing-network: (31.463401122s)
helpers_test.go:175: Cleaning up "existing-network-866998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-866998
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-866998: (1.94585499s)
--- PASS: TestKicExistingNetwork (33.56s)
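
TestKicExistingNetwork points --network= at a network that already exists instead of letting minikube create one. A hedged manual approximation; the explicit docker network create step is an assumption based on the test's name, since the log only shows the network listing and the start:

  # Presumed setup: create the bridge network first, then reuse it for the profile.
  docker network create existing-network
  out/minikube-linux-arm64 start -p existing-network-866998 --network=existing-network
  docker network ls --format {{.Name}}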

                                                
                                    
x
+
TestKicCustomSubnet (35.91s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-634926 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-634926 --subnet=192.168.60.0/24: (33.80533834s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-634926 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-634926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-634926
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-634926: (2.085586661s)
--- PASS: TestKicCustomSubnet (35.91s)
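
The subnet check above is just a start flag plus a docker network inspect of the network minikube created for the profile. The same two commands, with a trailing comparison added purely for illustration:

  # Pin the subnet at start time, then read it back from the profile's docker network.
  out/minikube-linux-arm64 start -p custom-subnet-634926 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-634926 --format "{{(index .IPAM.Config 0).Subnet}}" \
    | grep -qx 192.168.60.0/24 && echo "subnet applied"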

                                                
                                    
x
+
TestKicStaticIP (32.07s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-537813 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-537813 --static-ip=192.168.200.200: (29.849148074s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-537813 ip
helpers_test.go:175: Cleaning up "static-ip-537813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-537813
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-537813: (2.077117857s)
--- PASS: TestKicStaticIP (32.07s)
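
Same idea for the static IP case: the flag pins the node address and minikube ip is expected to echo it back. An illustrative check using the address from this run:

  # Pin the node IP, then compare what minikube ip reports against the requested address.
  out/minikube-linux-arm64 start -p static-ip-537813 --static-ip=192.168.200.200
  test "$(out/minikube-linux-arm64 -p static-ip-537813 ip)" = "192.168.200.200" && echo "static IP honoured"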

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (69.21s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-004163 --driver=docker  --container-runtime=crio
E0914 01:15:25.963892  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-004163 --driver=docker  --container-runtime=crio: (29.804064787s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-006829 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-006829 --driver=docker  --container-runtime=crio: (34.156857112s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-004163
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-006829
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-006829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-006829
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-006829: (2.019036325s)
helpers_test.go:175: Cleaning up "first-004163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-004163
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-004163: (1.934443463s)
--- PASS: TestMinikubeProfile (69.21s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.72s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-412291 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-412291 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.718480846s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-412291 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)
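
The two mount-start steps above boil down to starting a no-Kubernetes profile with an explicit host mount and then listing the mount point over ssh. Combined into one sketch with the exact flags from the log:

  # Start the mount-only profile, then confirm the host directory is visible at /minikube-host.
  out/minikube-linux-arm64 start -p mount-start-1-412291 --memory=2048 --mount \
    --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
    --no-kubernetes --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p mount-start-1-412291 ssh -- ls /minikube-host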

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-414309 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-414309 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.819522933s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.82s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-414309 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-412291 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-412291 --alsologtostderr -v=5: (1.611755872s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-414309 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-414309
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-414309: (1.200393428s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.97s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-414309
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-414309: (6.974530725s)
--- PASS: TestMountStart/serial/RestartStopped (7.97s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-414309 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (103.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-128403 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0914 01:17:26.630490  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-128403 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m42.732084751s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.21s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- rollout status deployment/busybox
E0914 01:18:49.693607  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-128403 -- rollout status deployment/busybox: (5.028893119s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec busybox-7dff88458-8jlcj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec busybox-7dff88458-ldh84 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec busybox-7dff88458-8jlcj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec busybox-7dff88458-ldh84 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec busybox-7dff88458-8jlcj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec busybox-7dff88458-ldh84 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.95s)
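
The deploy test above is a short pipeline: apply the busybox DNS manifest, wait for the rollout, then run nslookup from each pod. A rough equivalent; the loop over pods is illustrative, whereas the test addresses the two pods by name:

  # Deploy the test workload, wait for it, and exercise cluster DNS from each pod.
  out/minikube-linux-arm64 kubectl -p multinode-128403 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
  out/minikube-linux-arm64 kubectl -p multinode-128403 -- rollout status deployment/busybox
  for pod in $(out/minikube-linux-arm64 kubectl -p multinode-128403 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
    out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done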

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec busybox-7dff88458-8jlcj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec busybox-7dff88458-8jlcj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec busybox-7dff88458-ldh84 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-128403 -- exec busybox-7dff88458-ldh84 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)
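The host-reachability check above is easy to replay by hand. A minimal sketch, with the context and pod name copied from the log and plain `kubectl` standing in for the test's wrapped binary:

    # Resolve host.minikube.internal inside a pod (the test reads line 5 of the nslookup
    # output), then ping that address once from the same pod.
    HOST_IP=$(kubectl --context multinode-128403 exec busybox-7dff88458-8jlcj -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-128403 exec busybox-7dff88458-8jlcj -- sh -c "ping -c 1 ${HOST_IP}"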

                                                
                                    
x
+
TestMultiNode/serial/AddNode (30.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-128403 -v 3 --alsologtostderr
E0914 01:19:02.896112  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-128403 -v 3 --alsologtostderr: (29.976541559s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.61s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-128403 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.31s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp testdata/cp-test.txt multinode-128403:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp multinode-128403:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3254618540/001/cp-test_multinode-128403.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp multinode-128403:/home/docker/cp-test.txt multinode-128403-m02:/home/docker/cp-test_multinode-128403_multinode-128403-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m02 "sudo cat /home/docker/cp-test_multinode-128403_multinode-128403-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp multinode-128403:/home/docker/cp-test.txt multinode-128403-m03:/home/docker/cp-test_multinode-128403_multinode-128403-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m03 "sudo cat /home/docker/cp-test_multinode-128403_multinode-128403-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp testdata/cp-test.txt multinode-128403-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp multinode-128403-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3254618540/001/cp-test_multinode-128403-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp multinode-128403-m02:/home/docker/cp-test.txt multinode-128403:/home/docker/cp-test_multinode-128403-m02_multinode-128403.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403 "sudo cat /home/docker/cp-test_multinode-128403-m02_multinode-128403.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp multinode-128403-m02:/home/docker/cp-test.txt multinode-128403-m03:/home/docker/cp-test_multinode-128403-m02_multinode-128403-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m03 "sudo cat /home/docker/cp-test_multinode-128403-m02_multinode-128403-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp testdata/cp-test.txt multinode-128403-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp multinode-128403-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3254618540/001/cp-test_multinode-128403-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp multinode-128403-m03:/home/docker/cp-test.txt multinode-128403:/home/docker/cp-test_multinode-128403-m03_multinode-128403.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403 "sudo cat /home/docker/cp-test_multinode-128403-m03_multinode-128403.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 cp multinode-128403-m03:/home/docker/cp-test.txt multinode-128403-m02:/home/docker/cp-test_multinode-128403-m03_multinode-128403-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 ssh -n multinode-128403-m02 "sudo cat /home/docker/cp-test_multinode-128403-m03_multinode-128403-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.65s)
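The copy matrix above boils down to pushing a file onto one node with `minikube cp` and reading it back (directly or from another node) over `minikube ssh`. A sketch of one leg of that round trip, with names taken from the log and plain `minikube` standing in for out/minikube-linux-arm64:

    # Copy a local file onto the control-plane node, relay it to a worker, and verify both copies.
    minikube -p multinode-128403 cp testdata/cp-test.txt multinode-128403:/home/docker/cp-test.txt
    minikube -p multinode-128403 ssh -n multinode-128403 "sudo cat /home/docker/cp-test.txt"
    minikube -p multinode-128403 cp multinode-128403:/home/docker/cp-test.txt \
        multinode-128403-m02:/home/docker/cp-test_multinode-128403_multinode-128403-m02.txt
    minikube -p multinode-128403 ssh -n multinode-128403-m02 \
        "sudo cat /home/docker/cp-test_multinode-128403_multinode-128403-m02.txt"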

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-128403 node stop m03: (1.196316056s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-128403 status: exit status 7 (500.832872ms)

                                                
                                                
-- stdout --
	multinode-128403
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-128403-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-128403-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-128403 status --alsologtostderr: exit status 7 (504.604815ms)

                                                
                                                
-- stdout --
	multinode-128403
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-128403-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-128403-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 01:19:34.939287  989202 out.go:345] Setting OutFile to fd 1 ...
	I0914 01:19:34.939484  989202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:19:34.939498  989202 out.go:358] Setting ErrFile to fd 2...
	I0914 01:19:34.939503  989202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:19:34.939761  989202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 01:19:34.939953  989202 out.go:352] Setting JSON to false
	I0914 01:19:34.939992  989202 mustload.go:65] Loading cluster: multinode-128403
	I0914 01:19:34.940037  989202 notify.go:220] Checking for updates...
	I0914 01:19:34.940417  989202 config.go:182] Loaded profile config "multinode-128403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:19:34.940438  989202 status.go:255] checking status of multinode-128403 ...
	I0914 01:19:34.941557  989202 cli_runner.go:164] Run: docker container inspect multinode-128403 --format={{.State.Status}}
	I0914 01:19:34.959553  989202 status.go:330] multinode-128403 host status = "Running" (err=<nil>)
	I0914 01:19:34.959578  989202 host.go:66] Checking if "multinode-128403" exists ...
	I0914 01:19:34.959915  989202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-128403
	I0914 01:19:34.982768  989202 host.go:66] Checking if "multinode-128403" exists ...
	I0914 01:19:34.983103  989202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 01:19:34.983158  989202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-128403
	I0914 01:19:35.001405  989202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33699 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/multinode-128403/id_rsa Username:docker}
	I0914 01:19:35.092015  989202 ssh_runner.go:195] Run: systemctl --version
	I0914 01:19:35.096619  989202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:19:35.109330  989202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 01:19:35.175092  989202 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-14 01:19:35.164199279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 01:19:35.175805  989202 kubeconfig.go:125] found "multinode-128403" server: "https://192.168.67.2:8443"
	I0914 01:19:35.175846  989202 api_server.go:166] Checking apiserver status ...
	I0914 01:19:35.175891  989202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:19:35.188091  989202 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1353/cgroup
	I0914 01:19:35.198932  989202 api_server.go:182] apiserver freezer: "9:freezer:/docker/138cbdf19e89f80ca672e151b697ab255a77c79455484d349f188df4f578fb28/crio/crio-7b8352c83fb48c8b659645944109b1981a2592f5a0eda4b2fd20a5ea7c3fd3d7"
	I0914 01:19:35.199007  989202 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/138cbdf19e89f80ca672e151b697ab255a77c79455484d349f188df4f578fb28/crio/crio-7b8352c83fb48c8b659645944109b1981a2592f5a0eda4b2fd20a5ea7c3fd3d7/freezer.state
	I0914 01:19:35.208891  989202 api_server.go:204] freezer state: "THAWED"
	I0914 01:19:35.208923  989202 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 01:19:35.216830  989202 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0914 01:19:35.216858  989202 status.go:422] multinode-128403 apiserver status = Running (err=<nil>)
	I0914 01:19:35.216869  989202 status.go:257] multinode-128403 status: &{Name:multinode-128403 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 01:19:35.216887  989202 status.go:255] checking status of multinode-128403-m02 ...
	I0914 01:19:35.217217  989202 cli_runner.go:164] Run: docker container inspect multinode-128403-m02 --format={{.State.Status}}
	I0914 01:19:35.234493  989202 status.go:330] multinode-128403-m02 host status = "Running" (err=<nil>)
	I0914 01:19:35.234524  989202 host.go:66] Checking if "multinode-128403-m02" exists ...
	I0914 01:19:35.234828  989202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-128403-m02
	I0914 01:19:35.251637  989202 host.go:66] Checking if "multinode-128403-m02" exists ...
	I0914 01:19:35.251958  989202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 01:19:35.252013  989202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-128403-m02
	I0914 01:19:35.271976  989202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33704 SSHKeyPath:/home/jenkins/minikube-integration/19640-868698/.minikube/machines/multinode-128403-m02/id_rsa Username:docker}
	I0914 01:19:35.358425  989202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 01:19:35.370032  989202 status.go:257] multinode-128403-m02 status: &{Name:multinode-128403-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 01:19:35.370068  989202 status.go:255] checking status of multinode-128403-m03 ...
	I0914 01:19:35.370421  989202 cli_runner.go:164] Run: docker container inspect multinode-128403-m03 --format={{.State.Status}}
	I0914 01:19:35.386611  989202 status.go:330] multinode-128403-m03 host status = "Stopped" (err=<nil>)
	I0914 01:19:35.386634  989202 status.go:343] host is not running, skipping remaining checks
	I0914 01:19:35.386642  989202 status.go:257] multinode-128403-m03 status: &{Name:multinode-128403-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
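The non-zero exits above are expected: once a node is stopped, `status` reports it as Stopped and exits non-zero (7 in this run). A minimal sketch of the same check, profile name from the log:

    # Stop the third node, then look at the aggregate status; a non-zero exit is the signal here.
    minikube -p multinode-128403 node stop m03
    minikube -p multinode-128403 status --alsologtostderr
    echo "status exit code: $?"   # 7 in the run above, meaning at least one host is stopped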

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (10.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-128403 node start m03 -v=7 --alsologtostderr: (9.214327518s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.01s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (103.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-128403
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-128403
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-128403: (24.96601926s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-128403 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-128403 --wait=true -v=8 --alsologtostderr: (1m18.312472305s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-128403
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.40s)
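The point of this test is that a full stop and restart keeps every node in the profile. A minimal way to eyeball the same thing by hand, profile name from the log:

    # Record the node list, stop and restart the whole cluster, then compare the lists.
    minikube node list -p multinode-128403
    minikube stop -p multinode-128403
    minikube start -p multinode-128403 --wait=true -v=8 --alsologtostderr
    minikube node list -p multinode-128403   # should show the same nodes as before the stop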

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-128403 node delete m03: (4.872691178s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.54s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-128403 stop: (23.68624159s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-128403 status: exit status 7 (105.227676ms)

                                                
                                                
-- stdout --
	multinode-128403
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-128403-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-128403 status --alsologtostderr: exit status 7 (91.086972ms)

                                                
                                                
-- stdout --
	multinode-128403
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-128403-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 01:21:58.180351  997007 out.go:345] Setting OutFile to fd 1 ...
	I0914 01:21:58.180484  997007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:21:58.180499  997007 out.go:358] Setting ErrFile to fd 2...
	I0914 01:21:58.180505  997007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:21:58.180844  997007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 01:21:58.181059  997007 out.go:352] Setting JSON to false
	I0914 01:21:58.181082  997007 mustload.go:65] Loading cluster: multinode-128403
	I0914 01:21:58.181353  997007 notify.go:220] Checking for updates...
	I0914 01:21:58.181841  997007 config.go:182] Loaded profile config "multinode-128403": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:21:58.181861  997007 status.go:255] checking status of multinode-128403 ...
	I0914 01:21:58.182427  997007 cli_runner.go:164] Run: docker container inspect multinode-128403 --format={{.State.Status}}
	I0914 01:21:58.200671  997007 status.go:330] multinode-128403 host status = "Stopped" (err=<nil>)
	I0914 01:21:58.200694  997007 status.go:343] host is not running, skipping remaining checks
	I0914 01:21:58.200701  997007 status.go:257] multinode-128403 status: &{Name:multinode-128403 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 01:21:58.200735  997007 status.go:255] checking status of multinode-128403-m02 ...
	I0914 01:21:58.201061  997007 cli_runner.go:164] Run: docker container inspect multinode-128403-m02 --format={{.State.Status}}
	I0914 01:21:58.224305  997007 status.go:330] multinode-128403-m02 host status = "Stopped" (err=<nil>)
	I0914 01:21:58.224326  997007 status.go:343] host is not running, skipping remaining checks
	I0914 01:21:58.224333  997007 status.go:257] multinode-128403-m02 status: &{Name:multinode-128403-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.88s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-128403 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0914 01:22:26.630507  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-128403 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (54.239492446s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-128403 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.95s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-128403
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-128403-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-128403-m02 --driver=docker  --container-runtime=crio: exit status 14 (88.968121ms)

                                                
                                                
-- stdout --
	* [multinode-128403-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-128403-m02' is duplicated with machine name 'multinode-128403-m02' in profile 'multinode-128403'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-128403-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-128403-m03 --driver=docker  --container-runtime=crio: (34.398991447s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-128403
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-128403: exit status 80 (310.19592ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-128403 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-128403-m03 already exists in multinode-128403-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-128403-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-128403-m03: (1.932562899s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.79s)
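Both failures above are intentional: a new profile may not reuse a machine name that already belongs to an existing profile, and `node add` refuses a node name that is already taken. A sketch of the first check, with the exit code 14 (MK_USAGE) seen in the log:

    # multinode-128403-m02 is already a machine inside the multinode-128403 profile,
    # so creating a standalone profile with that name must be rejected.
    minikube start -p multinode-128403-m02 --driver=docker --container-runtime=crio
    echo "exit code: $?"   # expected: 14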

                                                
                                    
x
+
TestPreload (133.58s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-973743 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0914 01:24:02.896552  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-973743 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m41.555914523s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-973743 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-973743 image pull gcr.io/k8s-minikube/busybox: (3.193709778s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-973743
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-973743: (5.797396672s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-973743 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-973743 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.073829561s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-973743 image list
helpers_test.go:175: Cleaning up "test-preload-973743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-973743
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-973743: (2.658081824s)
--- PASS: TestPreload (133.58s)
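The flow exercised here is: create a cluster without the preloaded-images tarball, side-load an extra image, stop, restart with preload enabled, and confirm the side-loaded image survives. A hand-run sketch of the same sequence, with the profile name and versions from the log and plain `minikube` standing in for the test binary:

    # Start on an older Kubernetes without preload, pull an image into the node,
    # then restart and make sure the image is still listed.
    minikube start -p test-preload-973743 --memory=2200 --preload=false \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p test-preload-973743 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-973743
    minikube start -p test-preload-973743 --memory=2200 --wait=true \
        --driver=docker --container-runtime=crio
    minikube -p test-preload-973743 image list   # the busybox image should still be present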

                                                
                                    
x
+
TestScheduledStopUnix (104.66s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-821029 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-821029 --memory=2048 --driver=docker  --container-runtime=crio: (28.609359041s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-821029 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-821029 -n scheduled-stop-821029
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-821029 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-821029 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-821029 -n scheduled-stop-821029
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-821029
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-821029 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0914 01:27:26.630818  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-821029
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-821029: exit status 7 (69.52667ms)

                                                
                                                
-- stdout --
	scheduled-stop-821029
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-821029 -n scheduled-stop-821029
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-821029 -n scheduled-stop-821029: exit status 7 (66.221088ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-821029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-821029
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-821029: (4.455660384s)
--- PASS: TestScheduledStopUnix (104.66s)
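The scheduled-stop flow above is: schedule a stop far in the future, replace it with a short schedule, cancel it, then schedule again and let it fire, after which `status` reports Stopped with exit code 7. A condensed sketch, profile name from the log; the sleep is just an illustrative wait where the test polls instead:

    # Schedule, re-schedule, cancel, then schedule once more and wait for the stop to happen.
    minikube stop -p scheduled-stop-821029 --schedule 5m
    minikube stop -p scheduled-stop-821029 --schedule 15s
    minikube stop -p scheduled-stop-821029 --cancel-scheduled
    minikube status -p scheduled-stop-821029                        # still running after the cancel
    minikube stop -p scheduled-stop-821029 --schedule 15s
    sleep 30
    minikube status --format={{.Host}} -p scheduled-stop-821029     # Stopped, exit code 7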

                                                
                                    
x
+
TestInsufficientStorage (11.24s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-046836 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-046836 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.756497284s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4b08d224-a672-4ff2-9d53-06e831414c8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-046836] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"19ed4d01-f79a-4e32-b889-5275f21b6115","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"9151b7e2-b525-418a-9d32-3b7baa20762a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f75db472-e493-4939-aca7-39ac8a33bd04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig"}}
	{"specversion":"1.0","id":"9dcfea48-9ad2-4925-9f73-75a1c4d50fb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube"}}
	{"specversion":"1.0","id":"3db7be09-bf87-412f-8d41-3932a43dc68b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8fc646b2-b894-460d-92e3-b2bdc7d33a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ac1e5a65-55ed-4127-9e53-e1f5e527661e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fcfe6a51-f0ff-49f6-ba79-0d9e0ae11a46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"869e4379-3c56-4ec1-9e5c-ec13df19a611","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"87793d12-2301-4bfc-a5b1-6280c5a4307b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"81b1ad7a-524b-45f6-a4d6-c03fe5df789b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-046836\" primary control-plane node in \"insufficient-storage-046836\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"152e9e0f-0e2b-4384-8184-f12f620e302f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726243947-19640 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8353070e-e2d9-4ba5-8a48-322b011faccf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8a53cd1-01a0-419a-a4f9-45fc20cb1b25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-046836 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-046836 --output=json --layout=cluster: exit status 7 (278.013437ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-046836","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-046836","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 01:27:41.316875 1014442 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-046836" does not appear in /home/jenkins/minikube-integration/19640-868698/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-046836 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-046836 --output=json --layout=cluster: exit status 7 (282.762128ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-046836","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-046836","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 01:27:41.602071 1014502 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-046836" does not appear in /home/jenkins/minikube-integration/19640-868698/kubeconfig
	E0914 01:27:41.612435 1014502 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/insufficient-storage-046836/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-046836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-046836
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-046836: (1.922597352s)
--- PASS: TestInsufficientStorage (11.24s)
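The out-of-space condition here is simulated rather than real: the JSON output above shows the harness exporting MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, which apparently makes the storage preflight treat /var as full and abort with exit code 26 (RSRC_DOCKER_STORAGE). A sketch of reproducing that, assuming these test-only variables behave as they do in this run:

    # Pretend the disk is nearly full; start should bail out before creating the node.
    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p insufficient-storage-046836 --memory=2048 --output=json --wait=true \
        --driver=docker --container-runtime=crio
    echo "exit code: $?"   # expected: 26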

                                                
                                    
x
+
TestRunningBinaryUpgrade (85.14s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1949478549 start -p running-upgrade-467525 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1949478549 start -p running-upgrade-467525 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.251747453s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-467525 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0914 01:34:02.897740  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-467525 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.247429224s)
helpers_test.go:175: Cleaning up "running-upgrade-467525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-467525
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-467525: (2.832409458s)
--- PASS: TestRunningBinaryUpgrade (85.14s)
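The upgrade path exercised here is: create the cluster with an old release binary (a cached v1.26.0 build under /tmp in this run), then re-run `start` on the same profile with the freshly built binary while the cluster is still running. Schematically, with `minikube-old` and `minikube-new` as placeholder names for the two binaries:

    # Start with the old release, then upgrade the running cluster in place with the new build.
    minikube-old start -p running-upgrade-467525 --memory=2200 --vm-driver=docker --container-runtime=crio
    minikube-new start -p running-upgrade-467525 --memory=2200 --alsologtostderr -v=1 \
        --driver=docker --container-runtime=crio
    minikube-new delete -p running-upgrade-467525    # cleanup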

                                                
                                    
x
+
TestKubernetesUpgrade (473.97s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m15.628904794s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-042406
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-042406: (1.261448006s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-042406 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-042406 status --format={{.Host}}: exit status 7 (73.496754ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.171066253s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-042406 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (110.577132ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-042406] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-042406
	    minikube start -p kubernetes-upgrade-042406 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0424062 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-042406 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m58.280189475s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-042406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-042406
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-042406: (2.321708635s)
--- PASS: TestKubernetesUpgrade (473.97s)
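The sequence above is: bring the cluster up on v1.20.0, stop it, upgrade to v1.31.1, then confirm that a downgrade back to v1.20.0 is rejected (exit code 106, K8S_DOWNGRADE_UNSUPPORTED) while a restart at v1.31.1 still succeeds. A condensed sketch, profile name and versions from the log:

    # Upgrade is allowed; downgrade of an existing cluster is refused up front.
    minikube start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.20.0 \
        --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-042406
    minikube start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.31.1 \
        --driver=docker --container-runtime=crio
    minikube start -p kubernetes-upgrade-042406 --memory=2200 --kubernetes-version=v1.20.0 \
        --driver=docker --container-runtime=crio || echo "downgrade refused, exit $?"   # expect 106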

                                                
                                    
x
+
TestMissingContainerUpgrade (166.28s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
E0914 01:29:02.896533  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.356176670 start -p missing-upgrade-427435 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.356176670 start -p missing-upgrade-427435 --memory=2200 --driver=docker  --container-runtime=crio: (1m36.941935047s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-427435
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-427435: (10.444630717s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-427435
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-427435 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-427435 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.692116166s)
helpers_test.go:175: Cleaning up "missing-upgrade-427435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-427435
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-427435: (2.030905238s)
--- PASS: TestMissingContainerUpgrade (166.28s)
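This test deliberately deletes the Docker container out from under a profile created by an old release, then checks that a newer `minikube start` can recreate it. Roughly, with `minikube-old` as a placeholder for the cached v1.26.0 binary used in this run:

    # Create a cluster with the old release, remove its container behind minikube's back,
    # then let the new binary repair the profile.
    minikube-old start -p missing-upgrade-427435 --memory=2200 --driver=docker --container-runtime=crio
    docker stop missing-upgrade-427435
    docker rm missing-upgrade-427435
    minikube start -p missing-upgrade-427435 --memory=2200 --alsologtostderr -v=1 \
        --driver=docker --container-runtime=crio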

                                                
                                    
x
+
TestPause/serial/Start (59.55s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-649306 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-649306 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (59.552701958s)
--- PASS: TestPause/serial/Start (59.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-928282 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-928282 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (101.403916ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-928282] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
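The exit code 14 here is the expected usage error: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A one-liner to see the same message, names from the log:

    # Asking for a specific Kubernetes version while disabling Kubernetes is rejected up front.
    minikube start -p NoKubernetes-928282 --no-kubernetes --kubernetes-version=1.20 \
        --driver=docker --container-runtime=crio
    echo "exit code: $?"   # expected: 14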

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (39.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-928282 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-928282 --driver=docker  --container-runtime=crio: (38.759712935s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-928282 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (19.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-928282 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-928282 --no-kubernetes --driver=docker  --container-runtime=crio: (17.244367755s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-928282 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-928282 status -o json: exit status 2 (322.439947ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-928282","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-928282
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-928282: (2.024068216s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.59s)
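Note: with Kubernetes stopped the profile still exists, so `status -o json` reports Host "Running" but Kubelet/APIServer "Stopped" and returns a non-zero exit code (2 above); the test accepts that as the expected shape before deleting the profile. A minimal sketch of the same probe run by hand (profile name from this log):

    out/minikube-linux-arm64 -p NoKubernetes-928282 status -o json
    # {"Name":"NoKubernetes-928282","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
    echo $?   # 2 while the Kubernetes components are stopped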

                                                
                                    
TestNoKubernetes/serial/Start (6.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-928282 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-928282 --no-kubernetes --driver=docker  --container-runtime=crio: (6.605601097s)
--- PASS: TestNoKubernetes/serial/Start (6.61s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (26.72s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-649306 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-649306 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.709806179s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.72s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-928282 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-928282 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.791133ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
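Note: this verification is purely an exit-code check: `systemctl is-active` exits 0 only when the unit is active, so a non-zero status is taken as proof that no kubelet is running in the --no-kubernetes profile. A minimal sketch of the same check (command copied from the log):

    out/minikube-linux-arm64 ssh -p NoKubernetes-928282 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero (1 in this run; the remote systemctl itself exited with status 3)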

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.85s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-928282
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-928282: (1.22934371s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-928282 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-928282 --driver=docker  --container-runtime=crio: (8.29392194s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-928282 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-928282 "sudo systemctl is-active --quiet service kubelet": exit status 1 (376.744689ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestPause/serial/Pause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-649306 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

                                                
                                    
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-649306 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-649306 --output=json --layout=cluster: exit status 2 (384.049447ms)

                                                
                                                
-- stdout --
	{"Name":"pause-649306","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-649306","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)

                                                
                                    
TestPause/serial/Unpause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-649306 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

                                                
                                    
TestPause/serial/PauseAgain (1.06s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-649306 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-649306 --alsologtostderr -v=5: (1.062508649s)
--- PASS: TestPause/serial/PauseAgain (1.06s)

                                                
                                    
TestPause/serial/DeletePaused (2.71s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-649306 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-649306 --alsologtostderr -v=5: (2.707911648s)
--- PASS: TestPause/serial/DeletePaused (2.71s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.12s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-649306
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-649306: exit status 1 (16.689768ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-649306: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.12s)
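Note: cleanup is verified negatively: after `delete -p pause-649306` the profile's Docker volume must no longer be inspectable, so the exit status 1 and "no such volume" error above are the passing outcome. A minimal sketch of the same probes (names from this log; the --filter form is an illustrative addition, not what the test runs):

    docker volume inspect pause-649306        # exit 1: Error response from daemon: get pause-649306: no such volume
    docker ps -a --filter name=pause-649306   # should list no containers for the deleted profile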

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (84.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1534000375 start -p stopped-upgrade-430281 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0914 01:32:05.968002  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1534000375 start -p stopped-upgrade-430281 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.560597247s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1534000375 -p stopped-upgrade-430281 stop
E0914 01:32:26.630272  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1534000375 -p stopped-upgrade-430281 stop: (2.683184292s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-430281 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-430281 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.272825163s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (84.52s)
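Note: the upgrade path exercised here is: create the cluster with an old released binary, stop it, then start the same profile with the binary under test and let it adopt the existing state. A minimal sketch of the three steps exactly as run above (the /tmp path is the temporary copy of minikube v1.26.0 fetched by the test; the old release still uses the --vm-driver spelling):

    /tmp/minikube-v1.26.0.1534000375 start -p stopped-upgrade-430281 --memory=2200 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.26.0.1534000375 -p stopped-upgrade-430281 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-430281 --memory=2200 --driver=docker --container-runtime=crio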

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-430281
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-430281: (1.321702436s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

                                                
                                    
TestNetworkPlugins/group/false (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-882963 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-882963 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (184.152038ms)

                                                
                                                
-- stdout --
	* [false-882963] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 01:35:27.079267 1053845 out.go:345] Setting OutFile to fd 1 ...
	I0914 01:35:27.079456 1053845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:35:27.079486 1053845 out.go:358] Setting ErrFile to fd 2...
	I0914 01:35:27.079507 1053845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:35:27.079780 1053845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-868698/.minikube/bin
	I0914 01:35:27.080230 1053845 out.go:352] Setting JSON to false
	I0914 01:35:27.081156 1053845 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":19071,"bootTime":1726258656,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0914 01:35:27.081286 1053845 start.go:139] virtualization:  
	I0914 01:35:27.084537 1053845 out.go:177] * [false-882963] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 01:35:27.087941 1053845 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 01:35:27.087971 1053845 notify.go:220] Checking for updates...
	I0914 01:35:27.090663 1053845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 01:35:27.093343 1053845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-868698/kubeconfig
	I0914 01:35:27.095957 1053845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-868698/.minikube
	I0914 01:35:27.098587 1053845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 01:35:27.101145 1053845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 01:35:27.104320 1053845 config.go:182] Loaded profile config "kubernetes-upgrade-042406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0914 01:35:27.104431 1053845 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 01:35:27.133881 1053845 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 01:35:27.134034 1053845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 01:35:27.199373 1053845 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-14 01:35:27.189113893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 01:35:27.199485 1053845 docker.go:318] overlay module found
	I0914 01:35:27.202400 1053845 out.go:177] * Using the docker driver based on user configuration
	I0914 01:35:27.205075 1053845 start.go:297] selected driver: docker
	I0914 01:35:27.205097 1053845 start.go:901] validating driver "docker" against <nil>
	I0914 01:35:27.205111 1053845 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 01:35:27.208747 1053845 out.go:201] 
	W0914 01:35:27.211288 1053845 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0914 01:35:27.213853 1053845 out.go:201] 

                                                
                                                
** /stderr **
E0914 01:35:29.694892  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:88: 
----------------------- debugLogs start: false-882963 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-882963" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 01:35:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-042406
contexts:
- context:
    cluster: kubernetes-upgrade-042406
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 01:35:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-042406
  name: kubernetes-upgrade-042406
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-042406
  user:
    client-certificate: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/kubernetes-upgrade-042406/client.crt
    client-key: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/kubernetes-upgrade-042406/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-882963

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-882963"

                                                
                                                
----------------------- debugLogs end: false-882963 [took: 3.246459157s] --------------------------------
helpers_test.go:175: Cleaning up "false-882963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-882963
--- PASS: TestNetworkPlugins/group/false (3.58s)
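Note: the exit status 14 at the top of this block is the expected outcome: the crio container runtime requires a CNI, so `--cni=false` is rejected during flag validation and no cluster is ever created, which is why every debugLogs probe afterwards reports a missing context or profile. A minimal sketch of an invocation that passes this validation (the bridge CNI is an illustrative choice, not something this suite runs):

    out/minikube-linux-arm64 start -p false-882963 --memory=2048 --driver=docker --container-runtime=crio --cni=bridge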

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (183.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-517282 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0914 01:39:02.896754  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-517282 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m3.141650091s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (183.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (64.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-586738 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-586738 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m4.783671834s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-517282 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bf2ee94a-2eb0-4304-90aa-9a38b1184788] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bf2ee94a-2eb0-4304-90aa-9a38b1184788] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003295416s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-517282 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.81s)
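Note: the deploy step is a small smoke test: apply testdata/busybox.yaml, wait for a pod carrying the integration-test=busybox label in the default namespace, then exec a trivial command to confirm the runtime is wired up. A minimal sketch of the same flow done by hand against this profile (the `kubectl wait` form is an illustrative equivalent of the test's own polling loop):

    kubectl --context old-k8s-version-517282 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-517282 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-517282 exec busybox -- /bin/sh -c "ulimit -n"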

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-517282 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-517282 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.159595835s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-517282 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-517282 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-517282 --alsologtostderr -v=3: (12.059102526s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-586738 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a8288035-56c6-4c8a-8131-6e1acf71a1b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a8288035-56c6-4c8a-8131-6e1acf71a1b8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004821752s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-586738 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-517282 -n old-k8s-version-517282
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-517282 -n old-k8s-version-517282: exit status 7 (78.513848ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-517282 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
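Note: `status --format={{.Host}}` renders a single field of the status struct through a Go template; the exit status 7 corresponds to the stopped host, which the test tolerates ("may be ok") before enabling the dashboard addon offline. A minimal sketch of the same kind of templated query (the field names are the ones queried elsewhere in this report; combining them in one template is an assumption):

    out/minikube-linux-arm64 status -p old-k8s-version-517282 --format={{.Host}}      # Stopped, exit 7 while the node is down
    out/minikube-linux-arm64 status -p old-k8s-version-517282 --format={{.Host}}/{{.Kubelet}}/{{.APIServer}}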

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (148.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-517282 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-517282 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m27.729842581s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-517282 -n old-k8s-version-517282
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (148.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-586738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-586738 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-586738 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-586738 --alsologtostderr -v=3: (12.087153559s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-586738 -n no-preload-586738
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-586738 -n no-preload-586738: exit status 7 (76.633718ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-586738 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (335.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-586738 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 01:42:26.630472  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-586738 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (5m35.262015447s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-586738 -n no-preload-586738
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-n48kf" [405f071a-bd08-42f8-be84-bb264537d54d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006175834s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-n48kf" [405f071a-bd08-42f8-be84-bb264537d54d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004307493s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-517282 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-517282 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-517282 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-517282 -n old-k8s-version-517282
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-517282 -n old-k8s-version-517282: exit status 2 (310.467724ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-517282 -n old-k8s-version-517282
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-517282 -n old-k8s-version-517282: exit status 2 (322.299656ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-517282 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-517282 -n old-k8s-version-517282
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-517282 -n old-k8s-version-517282
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.93s)
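For reference, the pause check above reduces to the following command sequence (a sketch only, reusing the binary, profile name, and expected outputs recorded in this run; exit status 2 from status is treated as acceptable while the cluster is paused):

	# pause the cluster, then confirm the component states reported above
	out/minikube-linux-arm64 pause -p old-k8s-version-517282 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-517282 -n old-k8s-version-517282   # prints "Paused", exits 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-517282 -n old-k8s-version-517282     # prints "Stopped", exits 2
	# resume and re-check
	out/minikube-linux-arm64 unpause -p old-k8s-version-517282 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-517282 -n old-k8s-version-517282
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-517282 -n old-k8s-version-517282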

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (50.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-469298 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 01:44:02.896879  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-469298 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (50.352999157s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-469298 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f3ea4bb0-c2a7-4c5a-ba42-82e695707d72] Pending
helpers_test.go:344: "busybox" [f3ea4bb0-c2a7-4c5a-ba42-82e695707d72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f3ea4bb0-c2a7-4c5a-ba42-82e695707d72] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.035147199s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-469298 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-469298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-469298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.136155097s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-469298 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-469298 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-469298 --alsologtostderr -v=3: (12.060820323s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-469298 -n embed-certs-469298
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-469298 -n embed-certs-469298: exit status 7 (76.958369ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-469298 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
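The same steps can be replayed by hand (a sketch only, assuming the stopped embed-certs-469298 profile from this run; exit status 7 from status simply indicates the host is stopped):

	# host is down, so status exits 7 and prints "Stopped"
	out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-469298 -n embed-certs-469298
	# addons can still be enabled against the stopped profile
	out/minikube-linux-arm64 addons enable dashboard -p embed-certs-469298 --images=MetricsScraper=registry.k8s.io/echoserver:1.4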

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (267.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-469298 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 01:45:48.598300  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:48.604678  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:48.616089  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:48.637585  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:48.679039  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:48.760543  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:48.922389  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:49.244185  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:49.885684  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:51.167412  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:53.729654  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:45:58.851819  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:46:09.093822  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:46:29.575191  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-469298 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m27.034174261s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-469298 -n embed-certs-469298
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-thcmj" [649e320a-93b9-401a-9e53-aa2c16c5cde7] Running
E0914 01:47:10.536544  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003575475s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-thcmj" [649e320a-93b9-401a-9e53-aa2c16c5cde7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003722754s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-586738 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-586738 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-586738 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-586738 -n no-preload-586738
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-586738 -n no-preload-586738: exit status 2 (325.981117ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-586738 -n no-preload-586738
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-586738 -n no-preload-586738: exit status 2 (324.681766ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-586738 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-586738 -n no-preload-586738
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-586738 -n no-preload-586738
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-987547 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 01:47:26.630113  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:48:32.458478  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-987547 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m19.964461048s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-987547 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [896eedec-2f22-4080-858a-6d120fea7d01] Pending
helpers_test.go:344: "busybox" [896eedec-2f22-4080-858a-6d120fea7d01] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 01:48:45.970017  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [896eedec-2f22-4080-858a-6d120fea7d01] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003674436s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-987547 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-987547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-987547 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-987547 --alsologtostderr -v=3
E0914 01:49:02.896800  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-987547 --alsologtostderr -v=3: (11.944760663s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-987547 -n default-k8s-diff-port-987547
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-987547 -n default-k8s-diff-port-987547: exit status 7 (71.420145ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-987547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (296.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-987547 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-987547 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m55.99881361s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-987547 -n default-k8s-diff-port-987547
E0914 01:54:02.896821  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (296.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-86s5d" [4b7cfb4d-c73d-4301-a125-b7b337bf367a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003838743s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-86s5d" [4b7cfb4d-c73d-4301-a125-b7b337bf367a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003755595s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-469298 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-469298 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-469298 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-469298 -n embed-certs-469298
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-469298 -n embed-certs-469298: exit status 2 (364.451708ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-469298 -n embed-certs-469298
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-469298 -n embed-certs-469298: exit status 2 (370.684584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-469298 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-469298 -n embed-certs-469298
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-469298 -n embed-certs-469298
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (32.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-952824 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-952824 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (32.546674404s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (32.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-952824 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-952824 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.083853469s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-952824 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-952824 --alsologtostderr -v=3: (1.232503146s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-952824 -n newest-cni-952824
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-952824 -n newest-cni-952824: exit status 7 (75.887583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-952824 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-952824 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0914 01:50:48.597878  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-952824 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (14.976803913s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-952824 -n newest-cni-952824
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-952824 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-952824 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-952824 -n newest-cni-952824
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-952824 -n newest-cni-952824: exit status 2 (315.102979ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-952824 -n newest-cni-952824
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-952824 -n newest-cni-952824: exit status 2 (332.033828ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-952824 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-952824 -n newest-cni-952824
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-952824 -n newest-cni-952824
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.09s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (48.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0914 01:51:06.299043  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:06.305413  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:06.316785  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:06.338121  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:06.379488  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:06.460897  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:06.623763  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:06.945522  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:07.587577  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:08.869291  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:11.430618  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:16.300177  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:16.552201  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:51:26.794021  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (48.503743398s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.50s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-882963 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-882963 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c2xcz" [0a4e2fc2-7851-4ab0-bc2b-b8f5d373ac48] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 01:51:47.275516  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-c2xcz" [0a4e2fc2-7851-4ab0-bc2b-b8f5d373ac48] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003640082s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-882963 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
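The three connectivity checks above (DNS, Localhost, HairPin) amount to running the following against the netcat deployment created from testdata/netcat-deployment.yaml (a sketch only, assuming the auto-882963 context from this run):

	# DNS: resolve the in-cluster API service name from inside the pod
	kubectl --context auto-882963 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: the pod can reach its own port 8080
	kubectl --context auto-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod can reach itself back through its own service name
	kubectl --context auto-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"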

                                                
                                    
TestNetworkPlugins/group/flannel/Start (55.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0914 01:52:26.630727  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:52:28.237697  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (55.655827172s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.66s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gsw9d" [9f02fcd3-5e0d-4fb1-8162-cc85980e3e1f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004544116s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-882963 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-882963 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d52sx" [3fc330d4-b988-419f-b017-bd71993d2568] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d52sx" [3fc330d4-b988-419f-b017-bd71993d2568] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003422253s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-882963 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (70.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0914 01:53:50.159684  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m10.02688507s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2vtl7" [3c019d61-1015-416d-86a1-c69d811c7483] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004020043s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2vtl7" [3c019d61-1015-416d-86a1-c69d811c7483] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006730054s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-987547 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-987547 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-987547 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-987547 --alsologtostderr -v=1: (1.27391768s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-987547 -n default-k8s-diff-port-987547
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-987547 -n default-k8s-diff-port-987547: exit status 2 (463.040967ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-987547 -n default-k8s-diff-port-987547
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-987547 -n default-k8s-diff-port-987547: exit status 2 (484.740045ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-987547 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-987547 --alsologtostderr -v=1: (1.139589915s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-987547 -n default-k8s-diff-port-987547
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-987547 -n default-k8s-diff-port-987547
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.46s)
E0914 01:58:06.181975  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:10.467321  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:10.473689  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:10.485142  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:10.506502  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:10.547888  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:10.629807  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:10.791255  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:11.112544  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:11.753947  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:13.035620  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:15.597802  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:20.719104  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:30.961134  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:43.208518  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:43.214968  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:43.227021  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:43.248455  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:43.289982  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:43.371570  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:43.533188  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:43.854801  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:44.496835  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:45.779071  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:48.340887  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:51.442507  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/flannel-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:58:53.463039  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (63.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.233643098s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.23s)
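For reference, the custom-CNI start above maps onto a plain minikube invocation when run outside the harness (using a released minikube binary instead of out/minikube-linux-arm64). The profile name and manifest path below are placeholders; only the flags actually shown in the log are assumed:

    minikube start -p custom-flannel \
      --memory=3072 \
      --cni=/path/to/kube-flannel.yaml \
      --driver=docker \
      --container-runtime=crio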

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v5d8v" [0b999c15-5508-4127-b3ff-90f61a7a44f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004905008s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-882963 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-882963 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-frr8b" [61af4f69-418e-4431-a3e1-febc8d545ca9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-frr8b" [61af4f69-418e-4431-a3e1-febc8d545ca9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004366172s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-882963 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)
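The three connectivity checks above (DNS, localhost, hairpin) can be replayed manually with the exact commands from the log, assuming the netcat deployment created by the NetCatPod step is still running in the default namespace:

    kubectl --context calico-882963 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context calico-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context calico-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"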

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-882963 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-882963 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m9lvr" [cb6dbda1-1557-4181-9f89-289ed4f1c169] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m9lvr" [cb6dbda1-1557-4181-9f89-289ed4f1c169] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004524379s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-882963 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (89.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0914 01:55:48.598258  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/old-k8s-version-517282/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m29.419692258s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (77.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0914 01:56:06.298251  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:34.001903  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/no-preload-586738/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:44.243348  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:44.249699  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:44.261051  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:44.282472  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:44.323957  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:44.405436  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:44.566858  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:44.888260  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:45.530292  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:46.812115  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:49.373762  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:56:54.495842  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:57:04.737499  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m17.239057632s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-jw98v" [f6ae0158-c199-4edc-ac9b-2c892ad4cff1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004074213s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
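The readiness wait above is roughly equivalent to the following manual check; kubectl wait is standard kubectl rather than part of the test harness, and the label, namespace and timeout are taken from the log:

    kubectl --context kindnet-882963 -n kube-system wait \
      --for=condition=Ready pod -l app=kindnet --timeout=10m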

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-882963 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-882963 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fl5mr" [53f73dfd-52e7-4ee2-a782-cd2595e54673] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fl5mr" [53f73dfd-52e7-4ee2-a782-cd2595e54673] Running
E0914 01:57:25.219795  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/auto-882963/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:57:26.630167  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/functional-963815/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004018096s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-882963 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-882963 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-njzzp" [b78a138a-9bda-40ed-a27e-1076c1775caf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-njzzp" [b78a138a-9bda-40ed-a27e-1076c1775caf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004048813s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-882963 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-882963 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (68.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-882963 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m8.794274764s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-882963 "pgrep -a kubelet"
E0914 01:59:02.896947  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/addons-885748/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-882963 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-brw84" [b77f9d5f-b69d-4d33-9766-7b65d1d68c8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 01:59:03.704448  874079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/default-k8s-diff-port-987547/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-brw84" [b77f9d5f-b69d-4d33-9766-7b65d1d68c8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.0049728s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-882963 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-882963 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    

Test skip (30/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-830102 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-830102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-830102
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-627376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-627376
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-882963 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-882963" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 01:35:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-042406
contexts:
- context:
    cluster: kubernetes-upgrade-042406
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 01:35:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-042406
  name: kubernetes-upgrade-042406
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-042406
  user:
    client-certificate: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/kubernetes-upgrade-042406/client.crt
    client-key: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/kubernetes-upgrade-042406/client.key
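This kubeconfig contains only the kubernetes-upgrade-042406 entries and has an empty current-context, which is why the kubenet-882963 lookups in this debug dump fail with "context was not found". A quick manual confirmation against the same file (the path below is a placeholder for wherever the harness kubeconfig lives):

    KUBECONFIG=/path/to/kubeconfig kubectl config get-contexts
    KUBECONFIG=/path/to/kubeconfig kubectl config current-context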

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-882963

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-882963"

                                                
                                                
----------------------- debugLogs end: kubenet-882963 [took: 3.289512201s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-882963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-882963
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-882963 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-882963" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19640-868698/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 01:35:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-042406
contexts:
- context:
    cluster: kubernetes-upgrade-042406
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 01:35:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-042406
  name: kubernetes-upgrade-042406
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-042406
  user:
    client-certificate: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/kubernetes-upgrade-042406/client.crt
    client-key: /home/jenkins/minikube-integration/19640-868698/.minikube/profiles/kubernetes-upgrade-042406/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-882963

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-882963" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-882963"

                                                
                                                
----------------------- debugLogs end: cilium-882963 [took: 3.653854088s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-882963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-882963
--- SKIP: TestNetworkPlugins/group/cilium (3.80s)

                                                
                                    