Test Report: Docker_Linux_crio_arm64 19644

                    
c0eea096ace35e11d6c690a668e6718dc1bec60e:2024-09-15:36219

Test failures (4/328)

| Order | Failed test                                 | Duration (s) |
|-------|---------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                | 75.73        |
| 34    | TestAddons/parallel/Ingress                 | 153.94       |
| 36    | TestAddons/parallel/MetricsServer           | 357.59       |
| 174   | TestMultiControlPlane/serial/RestartCluster | 127.36       |
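For local triage, any single failure from this table can be rerun by name with the Go test runner's -run filter. A minimal sketch, assuming minikube's test/integration package layout and a generous timeout (neither is shown in this report):

    go test ./test/integration -run "TestAddons/parallel/Registry" -timeout 60m -v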
TestAddons/parallel/Registry (75.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.726088ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-dvjjx" [f6332eec-8451-4a18-b1e4-899a9c98a398] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005285009s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pph5w" [5bfdb7e0-869e-409d-b185-7e7c0d0386d6] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005741065s
addons_test.go:342: (dbg) Run:  kubectl --context addons-078133 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-078133 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-078133 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.150985289s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-078133 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
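The failing step is an in-cluster HTTP probe of the registry Service. To reproduce it by hand, the wget probe below is the exact command from the log; the nslookup variant is an added triage suggestion (using the same busybox image) to separate DNS failures from connectivity failures:

    kubectl --context addons-078133 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    kubectl --context addons-078133 run --rm dns-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      nslookup registry.kube-system.svc.cluster.local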
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 ip
2024/09/15 06:51:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 addons disable registry --alsologtostderr -v=1
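The DEBUG line above falls back to the registry's host-reachable endpoint at 192.168.49.2:5000 (the minikube node IP plus the registry port). That endpoint can also be probed directly from the host; /v2/ is the standard registry API ping path, assuming the addon runs a stock Docker registry:

    curl -sv http://192.168.49.2:5000/v2/

A 200 response here while the in-cluster probe still fails would point at Service/DNS wiring rather than the registry itself.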
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-078133
helpers_test.go:235: (dbg) docker inspect addons-078133:

-- stdout --
	[
	    {
	        "Id": "7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde",
	        "Created": "2024-09-15T06:38:37.750228282Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2524440,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:38:37.907510174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1b71fa87733590eb4674b16f6945626ae533f3af37066893e3fd70eb9476268",
	        "ResolvConfPath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/hostname",
	        "HostsPath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/hosts",
	        "LogPath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde-json.log",
	        "Name": "/addons-078133",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-078133:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-078133",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420-init/diff:/var/lib/docker/overlay2/72792481ba3fe11d67c9c5bebed6121eb09dffa903ddf816dfb06e703f2d9d5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-078133",
	                "Source": "/var/lib/docker/volumes/addons-078133/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-078133",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-078133",
	                "name.minikube.sigs.k8s.io": "addons-078133",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c8d7e1050dbe4977f54b06c2224002186fb12e89f8d90b585337ed8c180c6bd",
	            "SandboxKey": "/var/run/docker/netns/0c8d7e1050db",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35748"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35749"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35752"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35750"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35751"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-078133": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "61892ade19da7989ac86d074df0c7f6076bb69e05029d3382c7c93eab898c4ab",
	                    "EndpointID": "5578870202f5d628a4be39c5ca56e5901d1922ca753b45b5f33733d1f214df65",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-078133",
	                        "7434fa99399a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
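When only the published ports matter, the inspect dump above can be reduced with a Go template; this is the same mechanism the harness itself uses later in this log to look up the 22/tcp mapping:

    docker container inspect addons-078133 \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

Against the NetworkSettings.Ports block above, this would print 35748.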
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-078133 -n addons-078133
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-078133 logs -n 25: (1.906308413s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-196406   | jenkins | v1.34.0 | 15 Sep 24 06:37 UTC |                     |
	|         | -p download-only-196406              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| delete  | -p download-only-196406              | download-only-196406   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| start   | -o=json --download-only              | download-only-600407   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | -p download-only-600407              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| delete  | -p download-only-600407              | download-only-600407   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| delete  | -p download-only-196406              | download-only-196406   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| delete  | -p download-only-600407              | download-only-600407   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| start   | --download-only -p                   | download-docker-842211 | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | download-docker-842211               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-842211            | download-docker-842211 | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| start   | --download-only -p                   | binary-mirror-404653   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | binary-mirror-404653                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33149               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-404653              | binary-mirror-404653   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| addons  | enable dashboard -p                  | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | addons-078133                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | addons-078133                        |                        |         |         |                     |                     |
	| start   | -p addons-078133 --wait=true         | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-078133 addons                 | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-078133 addons                 | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-078133 addons disable         | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | -p addons-078133                     |                        |         |         |                     |                     |
	| ip      | addons-078133 ip                     | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| addons  | addons-078133 addons disable         | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:38:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:38:12.787229 2523870 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:38:12.787649 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:38:12.787663 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:38:12.787669 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:38:12.787948 2523870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 06:38:12.788417 2523870 out.go:352] Setting JSON to false
	I0915 06:38:12.789322 2523870 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":51644,"bootTime":1726330649,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0915 06:38:12.789406 2523870 start.go:139] virtualization:  
	I0915 06:38:12.792757 2523870 out.go:177] * [addons-078133] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 06:38:12.795650 2523870 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:38:12.795696 2523870 notify.go:220] Checking for updates...
	I0915 06:38:12.799075 2523870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:38:12.801817 2523870 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 06:38:12.804477 2523870 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	I0915 06:38:12.807247 2523870 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 06:38:12.809885 2523870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:38:12.812844 2523870 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:38:12.839036 2523870 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:38:12.839177 2523870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:38:12.891358 2523870 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:38:12.881981504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:38:12.891480 2523870 docker.go:318] overlay module found
	I0915 06:38:12.895859 2523870 out.go:177] * Using the docker driver based on user configuration
	I0915 06:38:12.898575 2523870 start.go:297] selected driver: docker
	I0915 06:38:12.898603 2523870 start.go:901] validating driver "docker" against <nil>
	I0915 06:38:12.898625 2523870 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:38:12.899275 2523870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:38:12.952158 2523870 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:38:12.942889904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:38:12.952417 2523870 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:38:12.952666 2523870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:38:12.955396 2523870 out.go:177] * Using Docker driver with root privileges
	I0915 06:38:12.957978 2523870 cni.go:84] Creating CNI manager for ""
	I0915 06:38:12.958053 2523870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:38:12.958067 2523870 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 06:38:12.958154 2523870 start.go:340] cluster config:
	{Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:38:12.961074 2523870 out.go:177] * Starting "addons-078133" primary control-plane node in "addons-078133" cluster
	I0915 06:38:12.963705 2523870 cache.go:121] Beginning downloading kic base image for docker with crio
	I0915 06:38:12.966437 2523870 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:38:12.969038 2523870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:38:12.969094 2523870 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0915 06:38:12.969106 2523870 cache.go:56] Caching tarball of preloaded images
	I0915 06:38:12.969131 2523870 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:38:12.969194 2523870 preload.go:172] Found /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0915 06:38:12.969204 2523870 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 06:38:12.969614 2523870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/config.json ...
	I0915 06:38:12.969647 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/config.json: {Name:mkd56c679d1e8eeb25c48c5bb5d09233f14404e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:12.984555 2523870 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:38:12.984708 2523870 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:38:12.984732 2523870 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:38:12.984740 2523870 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:38:12.984748 2523870 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:38:12.984758 2523870 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:38:30.356936 2523870 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:38:30.356980 2523870 cache.go:194] Successfully downloaded all kic artifacts
	I0915 06:38:30.357009 2523870 start.go:360] acquireMachinesLock for addons-078133: {Name:mkd22383cf6e30905104727dd6882efae296baf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:38:30.357138 2523870 start.go:364] duration metric: took 107.583µs to acquireMachinesLock for "addons-078133"
	I0915 06:38:30.357171 2523870 start.go:93] Provisioning new machine with config: &{Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:38:30.357256 2523870 start.go:125] createHost starting for "" (driver="docker")
	I0915 06:38:30.358886 2523870 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 06:38:30.359147 2523870 start.go:159] libmachine.API.Create for "addons-078133" (driver="docker")
	I0915 06:38:30.359182 2523870 client.go:168] LocalClient.Create starting
	I0915 06:38:30.359309 2523870 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem
	I0915 06:38:31.028935 2523870 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem
	I0915 06:38:31.157412 2523870 cli_runner.go:164] Run: docker network inspect addons-078133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 06:38:31.173542 2523870 cli_runner.go:211] docker network inspect addons-078133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 06:38:31.173630 2523870 network_create.go:284] running [docker network inspect addons-078133] to gather additional debugging logs...
	I0915 06:38:31.173652 2523870 cli_runner.go:164] Run: docker network inspect addons-078133
	W0915 06:38:31.189395 2523870 cli_runner.go:211] docker network inspect addons-078133 returned with exit code 1
	I0915 06:38:31.189428 2523870 network_create.go:287] error running [docker network inspect addons-078133]: docker network inspect addons-078133: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-078133 not found
	I0915 06:38:31.189442 2523870 network_create.go:289] output of [docker network inspect addons-078133]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-078133 not found
	
	** /stderr **
	I0915 06:38:31.189539 2523870 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:38:31.205841 2523870 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001792940}
	I0915 06:38:31.205885 2523870 network_create.go:124] attempt to create docker network addons-078133 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 06:38:31.205944 2523870 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-078133 addons-078133
	I0915 06:38:31.304079 2523870 network_create.go:108] docker network addons-078133 192.168.49.0/24 created
	I0915 06:38:31.304113 2523870 kic.go:121] calculated static IP "192.168.49.2" for the "addons-078133" container
	I0915 06:38:31.304203 2523870 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 06:38:31.322468 2523870 cli_runner.go:164] Run: docker volume create addons-078133 --label name.minikube.sigs.k8s.io=addons-078133 --label created_by.minikube.sigs.k8s.io=true
	I0915 06:38:31.345040 2523870 oci.go:103] Successfully created a docker volume addons-078133
	I0915 06:38:31.345137 2523870 cli_runner.go:164] Run: docker run --rm --name addons-078133-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-078133 --entrypoint /usr/bin/test -v addons-078133:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 06:38:33.575685 2523870 cli_runner.go:217] Completed: docker run --rm --name addons-078133-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-078133 --entrypoint /usr/bin/test -v addons-078133:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (2.230494087s)
	I0915 06:38:33.575720 2523870 oci.go:107] Successfully prepared a docker volume addons-078133
	I0915 06:38:33.575744 2523870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:38:33.575763 2523870 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 06:38:33.575830 2523870 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-078133:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 06:38:37.682758 2523870 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-078133:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.10688552s)
	I0915 06:38:37.682789 2523870 kic.go:203] duration metric: took 4.107023149s to extract preloaded images to volume ...
	W0915 06:38:37.682941 2523870 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0915 06:38:37.683057 2523870 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 06:38:37.735978 2523870 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-078133 --name addons-078133 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-078133 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-078133 --network addons-078133 --ip 192.168.49.2 --volume addons-078133:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0915 06:38:38.073869 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Running}}
	I0915 06:38:38.096611 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:38:38.117014 2523870 cli_runner.go:164] Run: docker exec addons-078133 stat /var/lib/dpkg/alternatives/iptables
	I0915 06:38:38.193401 2523870 oci.go:144] the created container "addons-078133" has a running status.
	I0915 06:38:38.193429 2523870 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa...
	I0915 06:38:40.103212 2523870 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 06:38:40.124321 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:38:40.145609 2523870 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 06:38:40.145635 2523870 kic_runner.go:114] Args: [docker exec --privileged addons-078133 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 06:38:40.201133 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:38:40.223083 2523870 machine.go:93] provisionDockerMachine start ...
	I0915 06:38:40.223185 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:40.248426 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:40.248710 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:40.248727 2523870 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:38:40.384623 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-078133
	
	I0915 06:38:40.384649 2523870 ubuntu.go:169] provisioning hostname "addons-078133"
	I0915 06:38:40.384719 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:40.402539 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:40.402807 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:40.402827 2523870 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-078133 && echo "addons-078133" | sudo tee /etc/hostname
	I0915 06:38:40.553443 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-078133
	
	I0915 06:38:40.553586 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:40.571125 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:40.571387 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:40.571403 2523870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-078133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-078133/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-078133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:38:40.709939 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:38:40.709969 2523870 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-2517725/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-2517725/.minikube}
	I0915 06:38:40.710052 2523870 ubuntu.go:177] setting up certificates
	I0915 06:38:40.710065 2523870 provision.go:84] configureAuth start
	I0915 06:38:40.710167 2523870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-078133
	I0915 06:38:40.728157 2523870 provision.go:143] copyHostCerts
	I0915 06:38:40.728258 2523870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem (1082 bytes)
	I0915 06:38:40.728439 2523870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem (1123 bytes)
	I0915 06:38:40.728531 2523870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem (1675 bytes)
	I0915 06:38:40.728606 2523870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem org=jenkins.addons-078133 san=[127.0.0.1 192.168.49.2 addons-078133 localhost minikube]
	I0915 06:38:42.353273 2523870 provision.go:177] copyRemoteCerts
	I0915 06:38:42.353353 2523870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:38:42.353400 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.373293 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:42.471278 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 06:38:42.497795 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:38:42.522600 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 06:38:42.547736 2523870 provision.go:87] duration metric: took 1.83765139s to configureAuth
	I0915 06:38:42.547820 2523870 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:38:42.548046 2523870 config.go:182] Loaded profile config "addons-078133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:38:42.548166 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.565534 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:42.565797 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:42.565821 2523870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 06:38:42.807672 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 06:38:42.807751 2523870 machine.go:96] duration metric: took 2.584641806s to provisionDockerMachine
	I0915 06:38:42.807788 2523870 client.go:171] duration metric: took 12.44858555s to LocalClient.Create
	I0915 06:38:42.807845 2523870 start.go:167] duration metric: took 12.448698434s to libmachine.API.Create "addons-078133"
	I0915 06:38:42.807872 2523870 start.go:293] postStartSetup for "addons-078133" (driver="docker")
	I0915 06:38:42.807911 2523870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:38:42.808014 2523870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:38:42.808114 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.826066 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:42.926144 2523870 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:38:42.930078 2523870 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:38:42.930114 2523870 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:38:42.930124 2523870 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:38:42.930131 2523870 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:38:42.930144 2523870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/addons for local assets ...
	I0915 06:38:42.930220 2523870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/files for local assets ...
	I0915 06:38:42.930252 2523870 start.go:296] duration metric: took 122.36099ms for postStartSetup
	I0915 06:38:42.930585 2523870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-078133
	I0915 06:38:42.948043 2523870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/config.json ...
	I0915 06:38:42.948387 2523870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:38:42.948443 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.965578 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:43.062057 2523870 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:38:43.066878 2523870 start.go:128] duration metric: took 12.709604826s to createHost
	I0915 06:38:43.066945 2523870 start.go:83] releasing machines lock for "addons-078133", held for 12.709793154s
	I0915 06:38:43.067058 2523870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-078133
	I0915 06:38:43.084231 2523870 ssh_runner.go:195] Run: cat /version.json
	I0915 06:38:43.084291 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:43.084556 2523870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:38:43.084638 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:43.110679 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:43.113521 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:43.205306 2523870 ssh_runner.go:195] Run: systemctl --version
	I0915 06:38:43.331819 2523870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 06:38:43.475451 2523870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:38:43.479654 2523870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:38:43.503032 2523870 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0915 06:38:43.503135 2523870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:38:43.549259 2523870 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0915 06:38:43.549327 2523870 start.go:495] detecting cgroup driver to use...
	I0915 06:38:43.549376 2523870 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:38:43.549460 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 06:38:43.568882 2523870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 06:38:43.581182 2523870 docker.go:217] disabling cri-docker service (if available) ...
	I0915 06:38:43.581292 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 06:38:43.595995 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 06:38:43.611893 2523870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 06:38:43.708103 2523870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 06:38:43.812378 2523870 docker.go:233] disabling docker service ...
	I0915 06:38:43.812466 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 06:38:43.833320 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 06:38:43.845521 2523870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 06:38:43.943839 2523870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 06:38:44.039910 2523870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 06:38:44.052271 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:38:44.069425 2523870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 06:38:44.069497 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.079718 2523870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 06:38:44.079845 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.090489 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.100780 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.111161 2523870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:38:44.120858 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.131104 2523870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.148858 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.159069 2523870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:38:44.168402 2523870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:38:44.177003 2523870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:38:44.265072 2523870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 06:38:44.374011 2523870 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 06:38:44.374133 2523870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 06:38:44.378540 2523870 start.go:563] Will wait 60s for crictl version
	I0915 06:38:44.378656 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:38:44.382546 2523870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:38:44.424234 2523870 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0915 06:38:44.424349 2523870 ssh_runner.go:195] Run: crio --version
	I0915 06:38:44.475232 2523870 ssh_runner.go:195] Run: crio --version
	I0915 06:38:44.519124 2523870 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0915 06:38:44.521747 2523870 cli_runner.go:164] Run: docker network inspect addons-078133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:38:44.537582 2523870 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 06:38:44.541419 2523870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:38:44.552857 2523870 kubeadm.go:883] updating cluster {Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:38:44.552984 2523870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:38:44.553046 2523870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:38:44.633055 2523870 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:38:44.633083 2523870 crio.go:433] Images already preloaded, skipping extraction
	I0915 06:38:44.633143 2523870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:38:44.673366 2523870 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:38:44.673388 2523870 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:38:44.673397 2523870 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0915 06:38:44.673491 2523870 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-078133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:38:44.673581 2523870 ssh_runner.go:195] Run: crio config
	I0915 06:38:44.732765 2523870 cni.go:84] Creating CNI manager for ""
	I0915 06:38:44.732858 2523870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:38:44.732877 2523870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:38:44.732902 2523870 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-078133 NodeName:addons-078133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:38:44.733049 2523870 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-078133"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 06:38:44.733130 2523870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:38:44.741946 2523870 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:38:44.742045 2523870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:38:44.750784 2523870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0915 06:38:44.770200 2523870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:38:44.789649 2523870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0915 06:38:44.808669 2523870 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:38:44.812327 2523870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:38:44.823008 2523870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:38:44.913291 2523870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:38:44.927747 2523870 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133 for IP: 192.168.49.2
	I0915 06:38:44.927778 2523870 certs.go:194] generating shared ca certs ...
	I0915 06:38:44.927795 2523870 certs.go:226] acquiring lock for ca certs: {Name:mk5e6b4b1562ab546f1aa57699f236200f49d7e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:44.928715 2523870 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key
	I0915 06:38:45.326164 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt ...
	I0915 06:38:45.326211 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt: {Name:mk5bc462617f9659ba52a2152c2f6ee2c4afd336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.326491 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key ...
	I0915 06:38:45.326511 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key: {Name:mke6fb53bd94c120122c79adc8bb1635818a4c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.326662 2523870 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key
	I0915 06:38:45.743346 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt ...
	I0915 06:38:45.743380 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt: {Name:mk061dad5fc3f04b4c5728856758e4e719a722f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.743581 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key ...
	I0915 06:38:45.743595 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key: {Name:mk8f4151cf3bb4e60b32b8767dc2cf5cf44a4505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.743681 2523870 certs.go:256] generating profile certs ...
	I0915 06:38:45.743744 2523870 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.key
	I0915 06:38:45.743762 2523870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt with IP's: []
	I0915 06:38:46.183135 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt ...
	I0915 06:38:46.183178 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: {Name:mkf0bebdecf567120b50e3d4771ed97fb5f77b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:46.184171 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.key ...
	I0915 06:38:46.184189 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.key: {Name:mkae22a5721ba63055014519e5295d510f1c607b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:46.184290 2523870 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b
	I0915 06:38:46.184313 2523870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 06:38:47.375989 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b ...
	I0915 06:38:47.376029 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b: {Name:mkbb0cbab611271bcaa81d025cb58e0f49d6b725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:47.376266 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b ...
	I0915 06:38:47.376282 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b: {Name:mk44cadca365ce4b4475fd5ecbd0d3a7ab4a5e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:47.376377 2523870 certs.go:381] copying /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b -> /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt
	I0915 06:38:47.376469 2523870 certs.go:385] copying /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b -> /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key
	I0915 06:38:47.376532 2523870 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key
	I0915 06:38:47.376553 2523870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt with IP's: []
	I0915 06:38:48.296446 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt ...
	I0915 06:38:48.296479 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt: {Name:mk03e5126ebac87175cd074a3278a221669ecd43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:48.296678 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key ...
	I0915 06:38:48.296694 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key: {Name:mk184d4436eb1531806b2bfcf3dbee00f090f348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:48.296914 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 06:38:48.296959 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem (1082 bytes)
	I0915 06:38:48.296989 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:38:48.297016 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem (1675 bytes)
	I0915 06:38:48.297633 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:38:48.326882 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:38:48.352922 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:38:48.378019 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 06:38:48.403101 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:38:48.427999 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 06:38:48.452962 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:38:48.477908 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 06:38:48.503859 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:38:48.530602 2523870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:38:48.549981 2523870 ssh_runner.go:195] Run: openssl version
	I0915 06:38:48.555953 2523870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:38:48.566111 2523870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:38:48.569738 2523870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:38 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:38:48.569808 2523870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:38:48.577078 2523870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 06:38:48.587122 2523870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:38:48.590775 2523870 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:38:48.590821 2523870 kubeadm.go:392] StartCluster: {Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:38:48.590906 2523870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 06:38:48.590965 2523870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 06:38:48.629289 2523870 cri.go:89] found id: ""
	I0915 06:38:48.629429 2523870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:38:48.638918 2523870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:38:48.648246 2523870 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 06:38:48.648316 2523870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:38:48.657387 2523870 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:38:48.657405 2523870 kubeadm.go:157] found existing configuration files:
	
	I0915 06:38:48.657462 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:38:48.666518 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:38:48.666640 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:38:48.675439 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:38:48.684448 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:38:48.684566 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:38:48.693351 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:38:48.702264 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:38:48.702338 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:38:48.711186 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:38:48.720567 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:38:48.720649 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 06:38:48.730182 2523870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 06:38:48.780919 2523870 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:38:48.781052 2523870 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:38:48.802135 2523870 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0915 06:38:48.802289 2523870 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0915 06:38:48.802372 2523870 kubeadm.go:310] OS: Linux
	I0915 06:38:48.802466 2523870 kubeadm.go:310] CGROUPS_CPU: enabled
	I0915 06:38:48.802552 2523870 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0915 06:38:48.802630 2523870 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0915 06:38:48.802710 2523870 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0915 06:38:48.802818 2523870 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0915 06:38:48.802915 2523870 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0915 06:38:48.803014 2523870 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0915 06:38:48.803111 2523870 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0915 06:38:48.803189 2523870 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0915 06:38:48.874483 2523870 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:38:48.874665 2523870 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:38:48.874796 2523870 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:38:48.883798 2523870 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:38:48.887479 2523870 out.go:235]   - Generating certificates and keys ...
	I0915 06:38:48.887581 2523870 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:38:48.887682 2523870 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:38:49.339220 2523870 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:38:49.759961 2523870 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:38:49.944078 2523870 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:38:50.140723 2523870 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:38:50.666643 2523870 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:38:50.666794 2523870 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-078133 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:38:51.163173 2523870 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:38:51.163312 2523870 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-078133 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:38:52.181466 2523870 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:38:53.099402 2523870 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:38:53.475256 2523870 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:38:53.475495 2523870 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:38:53.868399 2523870 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:38:54.581730 2523870 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:38:55.110775 2523870 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:38:55.547546 2523870 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:38:55.827561 2523870 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:38:55.828306 2523870 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:38:55.831902 2523870 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:38:55.835154 2523870 out.go:235]   - Booting up control plane ...
	I0915 06:38:55.835337 2523870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:38:55.835455 2523870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:38:55.836739 2523870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:38:55.846862 2523870 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:38:55.852654 2523870 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:38:55.852715 2523870 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:38:55.945745 2523870 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:38:55.945867 2523870 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:38:56.449913 2523870 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.018783ms
	I0915 06:38:56.450000 2523870 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:39:03.453388 2523870 kubeadm.go:310] [api-check] The API server is healthy after 7.001427516s
	I0915 06:39:03.470476 2523870 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:39:03.486771 2523870 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:39:03.522770 2523870 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:39:03.522970 2523870 kubeadm.go:310] [mark-control-plane] Marking the node addons-078133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:39:03.536015 2523870 kubeadm.go:310] [bootstrap-token] Using token: 4rqqjy.4t6rodzggmhhv6z7
	I0915 06:39:03.540612 2523870 out.go:235]   - Configuring RBAC rules ...
	I0915 06:39:03.540745 2523870 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:39:03.546080 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:39:03.556664 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:39:03.561376 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:39:03.565561 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:39:03.569472 2523870 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:39:03.858387 2523870 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:39:04.293335 2523870 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:39:04.857982 2523870 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:39:04.859195 2523870 kubeadm.go:310] 
	I0915 06:39:04.859277 2523870 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:39:04.859289 2523870 kubeadm.go:310] 
	I0915 06:39:04.859390 2523870 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:39:04.859410 2523870 kubeadm.go:310] 
	I0915 06:39:04.859436 2523870 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:39:04.859496 2523870 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:39:04.859547 2523870 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:39:04.859551 2523870 kubeadm.go:310] 
	I0915 06:39:04.859605 2523870 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:39:04.859610 2523870 kubeadm.go:310] 
	I0915 06:39:04.859656 2523870 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:39:04.859661 2523870 kubeadm.go:310] 
	I0915 06:39:04.859713 2523870 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:39:04.859787 2523870 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:39:04.859854 2523870 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:39:04.859859 2523870 kubeadm.go:310] 
	I0915 06:39:04.859942 2523870 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:39:04.860018 2523870 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:39:04.860024 2523870 kubeadm.go:310] 
	I0915 06:39:04.860106 2523870 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4rqqjy.4t6rodzggmhhv6z7 \
	I0915 06:39:04.860208 2523870 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f02174f41dc6c5be174745b50e9cc9798f9f759608b7a0f4d9403600d367dc26 \
	I0915 06:39:04.860228 2523870 kubeadm.go:310] 	--control-plane 
	I0915 06:39:04.860233 2523870 kubeadm.go:310] 
	I0915 06:39:04.860316 2523870 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:39:04.860321 2523870 kubeadm.go:310] 
	I0915 06:39:04.860401 2523870 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4rqqjy.4t6rodzggmhhv6z7 \
	I0915 06:39:04.860502 2523870 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f02174f41dc6c5be174745b50e9cc9798f9f759608b7a0f4d9403600d367dc26 
	I0915 06:39:04.863766 2523870 kubeadm.go:310] W0915 06:38:48.777179    1185 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:39:04.864101 2523870 kubeadm.go:310] W0915 06:38:48.777944    1185 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:39:04.864322 2523870 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0915 06:39:04.864429 2523870 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 06:39:04.864452 2523870 cni.go:84] Creating CNI manager for ""
	I0915 06:39:04.864461 2523870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:39:04.867489 2523870 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0915 06:39:04.870221 2523870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0915 06:39:04.874336 2523870 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0915 06:39:04.874362 2523870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0915 06:39:04.894284 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0915 06:39:05.208677 2523870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:39:05.208832 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:05.208913 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-078133 minikube.k8s.io/updated_at=2024_09_15T06_39_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-078133 minikube.k8s.io/primary=true
	I0915 06:39:05.363687 2523870 ops.go:34] apiserver oom_adj: -16
	I0915 06:39:05.363789 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:05.864408 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:06.363995 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:06.864868 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:07.364405 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:07.864339 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:08.364323 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:08.863944 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:09.038552 2523870 kubeadm.go:1113] duration metric: took 3.829784576s to wait for elevateKubeSystemPrivileges
	I0915 06:39:09.038581 2523870 kubeadm.go:394] duration metric: took 20.447764237s to StartCluster
	I0915 06:39:09.038600 2523870 settings.go:142] acquiring lock: {Name:mka250035ae8fe54edf72ffd2d620ea51b449138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:39:09.038726 2523870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 06:39:09.039111 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/kubeconfig: {Name:mkc3f194059147bb4fbadd341bbbabf67fee0985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:39:09.039939 2523870 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:39:09.040131 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:39:09.040325 2523870 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0915 06:39:09.040408 2523870 config.go:182] Loaded profile config "addons-078133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:39:09.040435 2523870 addons.go:69] Setting yakd=true in profile "addons-078133"
	I0915 06:39:09.040446 2523870 addons.go:69] Setting inspektor-gadget=true in profile "addons-078133"
	I0915 06:39:09.040451 2523870 addons.go:234] Setting addon yakd=true in "addons-078133"
	I0915 06:39:09.040456 2523870 addons.go:234] Setting addon inspektor-gadget=true in "addons-078133"
	I0915 06:39:09.040480 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.040485 2523870 addons.go:69] Setting cloud-spanner=true in profile "addons-078133"
	I0915 06:39:09.040495 2523870 addons.go:234] Setting addon cloud-spanner=true in "addons-078133"
	I0915 06:39:09.040508 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041050 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041482 2523870 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-078133"
	I0915 06:39:09.041560 2523870 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-078133"
	I0915 06:39:09.041613 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041647 2523870 addons.go:69] Setting metrics-server=true in profile "addons-078133"
	I0915 06:39:09.041912 2523870 addons.go:234] Setting addon metrics-server=true in "addons-078133"
	I0915 06:39:09.041934 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.042360 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.042974 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041662 2523870 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-078133"
	I0915 06:39:09.043422 2523870 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-078133"
	I0915 06:39:09.043458 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.044071 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.052905 2523870 out.go:177] * Verifying Kubernetes components...
	I0915 06:39:09.041670 2523870 addons.go:69] Setting registry=true in profile "addons-078133"
	I0915 06:39:09.053360 2523870 addons.go:234] Setting addon registry=true in "addons-078133"
	I0915 06:39:09.041677 2523870 addons.go:69] Setting storage-provisioner=true in profile "addons-078133"
	I0915 06:39:09.053594 2523870 addons.go:234] Setting addon storage-provisioner=true in "addons-078133"
	I0915 06:39:09.053698 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041685 2523870 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-078133"
	I0915 06:39:09.056926 2523870 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-078133"
	I0915 06:39:09.057295 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.062965 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041693 2523870 addons.go:69] Setting volcano=true in profile "addons-078133"
	I0915 06:39:09.065091 2523870 addons.go:234] Setting addon volcano=true in "addons-078133"
	I0915 06:39:09.065130 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.065593 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.063209 2523870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:39:09.040480 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041789 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041702 2523870 addons.go:69] Setting volumesnapshots=true in profile "addons-078133"
	I0915 06:39:09.085273 2523870 addons.go:234] Setting addon volumesnapshots=true in "addons-078133"
	I0915 06:39:09.085333 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.085846 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041796 2523870 addons.go:69] Setting gcp-auth=true in profile "addons-078133"
	I0915 06:39:09.086076 2523870 mustload.go:65] Loading cluster: addons-078133
	I0915 06:39:09.086239 2523870 config.go:182] Loaded profile config "addons-078133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:39:09.086465 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041801 2523870 addons.go:69] Setting default-storageclass=true in profile "addons-078133"
	I0915 06:39:09.094560 2523870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-078133"
	I0915 06:39:09.094904 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041806 2523870 addons.go:69] Setting ingress=true in profile "addons-078133"
	I0915 06:39:09.105001 2523870 addons.go:234] Setting addon ingress=true in "addons-078133"
	I0915 06:39:09.105055 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.105584 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041811 2523870 addons.go:69] Setting ingress-dns=true in profile "addons-078133"
	I0915 06:39:09.105828 2523870 addons.go:234] Setting addon ingress-dns=true in "addons-078133"
	I0915 06:39:09.105864 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.106291 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.063670 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.139706 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.157805 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
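
Every addon toggle above is serialized behind a probe of the node container's state via docker container inspect --format={{.State.Status}}. A minimal sketch of that probe, assuming only the Docker CLI on PATH and reusing the profile name from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerStatus mirrors the repeated cli_runner probe above: ask
    // Docker for the container's .State.Status ("running", "exited", ...).
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		name, "--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", name, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	status, err := containerStatus("addons-078133")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("container state:", status)
    }
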
	I0915 06:39:09.241029 2523870 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:39:09.244895 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:39:09.244991 2523870 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:39:09.245101 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.252566 2523870 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:39:09.255882 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:39:09.255913 2523870 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:39:09.255985 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.305949 2523870 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:39:09.309848 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:39:09.310085 2523870 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:39:09.310113 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:39:09.310186 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.322978 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:39:09.329149 2523870 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-078133"
	I0915 06:39:09.329212 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.329744 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.346286 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:39:09.349169 2523870 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:39:09.349337 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:39:09.349376 2523870 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:39:09.349484 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.354629 2523870 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:39:09.354704 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:39:09.354789 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.367623 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:39:09.389092 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:39:09.389347 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:39:09.389610 2523870 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:39:09.389626 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:39:09.389688 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	W0915 06:39:09.391591 2523870 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0915 06:39:09.391963 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.396501 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:39:09.398337 2523870 addons.go:234] Setting addon default-storageclass=true in "addons-078133"
	I0915 06:39:09.398383 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.398799 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.406062 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:39:09.406277 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 06:39:09.406914 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:39:09.411306 2523870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:39:09.411331 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:39:09.411398 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.432227 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:39:09.434825 2523870 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:39:09.435043 2523870 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:39:09.435065 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:39:09.435134 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.437472 2523870 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:39:09.437496 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:39:09.437566 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.453082 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:39:09.457762 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:39:09.462413 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:39:09.468969 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:39:09.471555 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:39:09.471593 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:39:09.471669 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.482934 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
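
The bash pipeline above rewrites the coredns ConfigMap in place: sed inserts a hosts plugin block ahead of the forward . /etc/resolv.conf directive (and a log directive after errors), then pushes the result back with kubectl replace -f -. Reassembled from the sed expressions, the injected Corefile fragment is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

This lets pods resolve host.minikube.internal to the Docker network gateway (192.168.49.1), which the "host record injected" line further down confirms.
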
	I0915 06:39:09.483223 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.484125 2523870 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:39:09.487259 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:39:09.487279 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:39:09.487344 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.520984 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.593269 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.596982 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.597062 2523870 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:39:09.599402 2523870 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:39:09.599428 2523870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:39:09.599501 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.602275 2523870 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:39:09.604798 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.607521 2523870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:39:09.607774 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:39:09.608168 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.621024 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.634782 2523870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:39:09.641915 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.644998 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.679310 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.699858 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.709617 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.725574 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.726343 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.967170 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:39:09.967196 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:39:10.051753 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:39:10.051784 2523870 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:39:10.123585 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:39:10.131017 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:39:10.155112 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:39:10.155140 2523870 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:39:10.162216 2523870 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:39:10.162242 2523870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:39:10.168215 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:39:10.200571 2523870 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:39:10.200648 2523870 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:39:10.204330 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:39:10.207613 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:39:10.207693 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:39:10.221132 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:39:10.221213 2523870 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:39:10.229090 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:39:10.232441 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:39:10.236135 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:39:10.253555 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:39:10.253632 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:39:10.314939 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:39:10.315016 2523870 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:39:10.319329 2523870 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:39:10.319406 2523870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:39:10.359489 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:39:10.359560 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:39:10.377308 2523870 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:39:10.377381 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:39:10.388486 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:39:10.388563 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:39:10.430613 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:39:10.430693 2523870 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:39:10.536291 2523870 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:39:10.536370 2523870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:39:10.546167 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:39:10.563456 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:39:10.563540 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:39:10.590878 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:39:10.595036 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:39:10.595130 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:39:10.651963 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:39:10.652038 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:39:10.780564 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:39:10.780649 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:39:10.783802 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:39:10.783880 2523870 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:39:10.787389 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:39:10.787467 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:39:10.855263 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:39:10.910709 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:39:10.910790 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:39:10.943539 2523870 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:39:10.943619 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:39:10.947004 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:39:10.947081 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:39:10.975982 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:39:10.976062 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:39:11.041384 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:39:11.041456 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:39:11.041859 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:39:11.041910 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:39:11.067123 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:39:11.169804 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:39:11.187844 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:39:11.187928 2523870 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:39:11.413987 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:39:11.414061 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:39:11.545139 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:39:11.545161 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:39:11.690868 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:39:11.690891 2523870 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:39:11.861968 2523870 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.378992448s)
	I0915 06:39:11.861995 2523870 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0915 06:39:11.863108 2523870 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.22829938s)
	I0915 06:39:11.863907 2523870 node_ready.go:35] waiting up to 6m0s for node "addons-078133" to be "Ready" ...
	I0915 06:39:11.925007 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:39:12.734191 2523870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-078133" context rescaled to 1 replicas
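
The "rescaled to 1 replicas" step above trims the coredns Deployment down for the single-node cluster. One way to do the same with client-go's scale subresource, a sketch only, assuming the kubeconfig path used throughout this log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	ctx := context.Background()
    	// Read the current scale, set the desired replica count, write it back.
    	scale, err := cs.AppsV1().Deployments("kube-system").
    		GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	scale.Spec.Replicas = 1
    	if _, err := cs.AppsV1().Deployments("kube-system").
    		UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("coredns scaled to 1 replica")
    }
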
	I0915 06:39:13.816313 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.692684755s)
	I0915 06:39:13.816426 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.685386035s)
	I0915 06:39:13.816486 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.648202296s)
	I0915 06:39:13.948928 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:14.413876 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.209453947s)
	I0915 06:39:15.491159 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.261979832s)
	I0915 06:39:15.491246 2523870 addons.go:475] Verifying addon ingress=true in "addons-078133"
	I0915 06:39:15.491560 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.259043386s)
	I0915 06:39:15.491668 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.255460851s)
	I0915 06:39:15.491897 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.945656931s)
	I0915 06:39:15.491911 2523870 addons.go:475] Verifying addon metrics-server=true in "addons-078133"
	I0915 06:39:15.491940 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.900983898s)
	I0915 06:39:15.491947 2523870 addons.go:475] Verifying addon registry=true in "addons-078133"
	I0915 06:39:15.492354 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.637011622s)
	I0915 06:39:15.492468 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.425238269s)
	I0915 06:39:15.492570 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.322686637s)
	W0915 06:39:15.492507 2523870 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:39:15.492702 2523870 retry.go:31] will retry after 365.365183ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
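
This failure is a CRD establishment race rather than a real error: the VolumeSnapshotClass object is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server's REST mapper has not registered the new kind yet, hence "ensure CRDs are installed first". The log schedules a retry after 365ms and ultimately succeeds with kubectl apply --force a few lines below. A hedged sketch of such a retry loop (backoff values are illustrative, not minikube's):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs `kubectl apply` until freshly created CRDs
    // are established and dependent objects can be mapped.
    func applyWithRetry(attempts int, files ...string) error {
    	args := []string{"apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", args...).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
    		// Linear backoff before the next attempt.
    		time.Sleep(time.Duration(i+1) * 500 * time.Millisecond)
    	}
    	return lastErr
    }

    func main() {
    	if err := applyWithRetry(3,
    		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
    		panic(err)
    	}
    }
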
	I0915 06:39:15.494865 2523870 out.go:177] * Verifying registry addon...
	I0915 06:39:15.494883 2523870 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-078133 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:39:15.494996 2523870 out.go:177] * Verifying ingress addon...
	I0915 06:39:15.499126 2523870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:39:15.499146 2523870 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:39:15.508673 2523870 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:39:15.508703 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:15.509966 2523870 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:39:15.510037 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0915 06:39:15.524385 2523870 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
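
The default-storageclass failure above is Kubernetes optimistic concurrency at work: the StorageClass's resourceVersion changed between read and write (the storage-provisioner-rancher addon was mutating local-path concurrently), so the stale update is rejected. The canonical fix is to re-read and retry. A sketch using client-go's conflict-retry helper, assuming the same kubeconfig path as above:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    // markNonDefault flips the default-class annotation on a StorageClass,
    // retrying on resourceVersion conflicts via retry.RetryOnConflict.
    func markNonDefault(cs *kubernetes.Clientset, name string) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().
    			Get(context.Background(), name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    		_, err = cs.StorageV1().StorageClasses().
    			Update(context.Background(), sc, metav1.UpdateOptions{})
    		return err // a Conflict here triggers another attempt with fresh state
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	if err := markNonDefault(kubernetes.NewForConfigOrDie(cfg), "local-path"); err != nil {
    		panic(err)
    	}
    }
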
	I0915 06:39:15.858832 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:39:15.879445 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.954334967s)
	I0915 06:39:15.879493 2523870 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-078133"
	I0915 06:39:15.882304 2523870 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:39:15.886174 2523870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:39:15.939391 2523870 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:39:15.939465 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
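
Each kapi.go "Waiting for pod with label ..." loop below is a list-by-selector poll that repeats until a matching pod reports phase Running, which is why long runs of Pending lines follow. A minimal equivalent with client-go, a sketch under the same kubeconfig assumption:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls the API server until a pod matching the label
    // selector reaches phase Running, or the timeout expires.
    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute))
    }
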
	I0915 06:39:16.048275 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:16.059314 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:16.367719 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:16.390881 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:16.513275 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:16.521440 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:16.891066 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:17.005641 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:17.007645 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:17.130505 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.27161243s)
	I0915 06:39:17.390841 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:17.503165 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:17.504695 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:17.890914 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:18.008065 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:18.009583 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:18.371574 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:18.390782 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:18.506247 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:18.506438 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:18.560915 2523870 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:39:18.560997 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:18.579856 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:18.744915 2523870 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:39:18.764474 2523870 addons.go:234] Setting addon gcp-auth=true in "addons-078133"
	I0915 06:39:18.764523 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:18.765025 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:18.782156 2523870 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:39:18.782213 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:18.801456 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:18.904312 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:18.904653 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:39:18.907445 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:39:18.910534 2523870 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:39:18.910565 2523870 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:39:18.936545 2523870 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:39:18.936579 2523870 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:39:18.963991 2523870 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:39:18.964067 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
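
The "scp memory --> ..." lines stream a manifest held in memory straight onto the node instead of copying a local file. A rough equivalent using the SSH endpoint advertised in this log (127.0.0.1:35748, user docker, the jenkins id_rsa key); the manifest payload here is illustrative, not the real 700-byte gcp-auth-ns.yaml:

    package main

    import (
    	"os/exec"
    	"strings"
    )

    // pushManifest writes an in-memory manifest to a path on the node by
    // piping it over ssh into `sudo tee`. Connection details are taken
    // from the sshutil lines in this log.
    func pushManifest(contents, remotePath string) error {
    	cmd := exec.Command("ssh",
    		"-i", "/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa",
    		"-p", "35748",
    		"docker@127.0.0.1",
    		"sudo tee "+remotePath+" >/dev/null")
    	cmd.Stdin = strings.NewReader(contents)
    	return cmd.Run()
    }

    func main() {
    	ns := "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gcp-auth\n"
    	if err := pushManifest(ns, "/etc/kubernetes/addons/gcp-auth-ns.yaml"); err != nil {
    		panic(err)
    	}
    }
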
	I0915 06:39:19.000463 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:39:19.016170 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:19.018516 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:19.395257 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:19.504167 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:19.505568 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:19.690148 2523870 addons.go:475] Verifying addon gcp-auth=true in "addons-078133"
	I0915 06:39:19.694850 2523870 out.go:177] * Verifying gcp-auth addon...
	I0915 06:39:19.714020 2523870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:39:19.735242 2523870 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:39:19.735265 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:19.889636 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:20.006962 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:20.015633 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:20.219761 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:20.390783 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:20.503049 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:20.503934 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:20.717230 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:20.867048 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:20.890525 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:21.008560 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:21.010633 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:21.218675 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:21.398063 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:21.503634 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:21.505331 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:21.718256 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:21.891285 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:22.004961 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:22.006610 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:22.219382 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:22.391119 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:22.505105 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:22.506699 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:22.718469 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:22.868045 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:22.891039 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:23.006023 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:23.007330 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:23.217716 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:23.392441 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:23.504360 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:23.505442 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:23.718077 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:23.890026 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:24.009952 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:24.011764 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:24.217196 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:24.390856 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:24.503823 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:24.504306 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:24.717265 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:24.890322 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:25.004815 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:25.009217 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:25.218931 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:25.368330 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:25.390248 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:25.504490 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:25.504784 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:25.718031 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:25.889897 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:26.006178 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:26.009321 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:26.217851 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:26.390260 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:26.503645 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:26.503929 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:26.717228 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:26.889966 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:27.005860 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:27.006534 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:27.217232 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:27.391379 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:27.503218 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:27.504180 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:27.717918 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:27.867581 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:27.890599 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:28.008041 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:28.010528 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:28.218488 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:28.390431 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:28.503223 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:28.503754 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:28.718274 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:28.890278 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:29.004652 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:29.006990 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:29.217428 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:29.390775 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:29.503442 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:29.504951 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:29.717347 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:29.867767 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:29.889736 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:30.013658 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:30.013836 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:30.219186 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:30.391799 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:30.503268 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:30.504148 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:30.717747 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:30.890714 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:31.004930 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:31.005992 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:31.217720 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:31.390558 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:31.503622 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:31.504583 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:31.718229 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:31.890555 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:32.008758 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:32.009715 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:32.217800 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:32.367710 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:32.389503 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:32.504290 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:32.504617 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:32.718358 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:32.890232 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:33.013792 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:33.014310 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:33.217772 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:33.389964 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:33.503854 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:33.504297 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:33.718265 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:33.890626 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:34.005812 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:34.007225 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:34.218580 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:34.368052 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:34.389929 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:34.502638 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:34.503613 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:34.718366 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:34.891557 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:35.009694 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:35.021653 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:35.218731 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:35.390461 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:35.504550 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:35.506436 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:35.718202 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:35.890352 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:36.006752 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:36.008736 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:36.217910 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:36.390208 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:36.503044 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:36.503488 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:36.717595 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:36.867872 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:36.890611 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:37.007512 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:37.008318 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:37.217196 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:37.389970 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:37.502759 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:37.503952 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:37.717068 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:37.890324 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:38.008794 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:38.009771 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:38.217829 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:38.389937 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:38.503592 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:38.504486 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:38.717991 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:38.890450 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:39.008193 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:39.009653 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:39.226065 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:39.367638 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:39.390621 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:39.507715 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:39.508472 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:39.718445 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:39.890449 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:40.011215 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:40.031551 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:40.218036 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:40.390520 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:40.506183 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:40.507671 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:40.718484 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:40.889891 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:41.006703 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:41.007677 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:41.217954 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:41.368038 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:41.390857 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:41.502948 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:41.503795 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:41.723269 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:41.890629 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:42.009905 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:42.010464 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:42.217795 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:42.390908 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:42.503860 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:42.504836 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:42.717714 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:42.890761 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:43.007858 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:43.008735 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:43.217902 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:43.389922 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:43.502784 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:43.503593 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:43.717585 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:43.868251 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:43.890507 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:44.014356 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:44.014574 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:44.218704 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:44.390683 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:44.503015 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:44.503922 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:44.717370 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:44.890339 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:45.006474 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:45.008151 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:45.218416 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:45.390283 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:45.503879 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:45.504683 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:45.717454 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:45.890475 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:46.008464 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:46.011999 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:46.217682 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:46.367996 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:46.390451 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:46.503110 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:46.504008 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:46.717277 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:46.890358 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:47.006411 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:47.007378 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:47.217355 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:47.390037 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:47.503022 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:47.503858 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:47.717276 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:47.890100 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:48.011525 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:48.014501 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:48.217881 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:48.390415 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:48.502868 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:48.503714 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:48.717603 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:48.868116 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:48.889580 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:49.007659 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:49.008613 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:49.221630 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:49.390355 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:49.503859 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:49.504764 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:49.717278 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:49.890162 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:50.016362 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:50.016914 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:50.218199 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:50.390287 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:50.503347 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:50.504044 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:50.717043 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:50.890485 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:51.049786 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:51.062794 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:51.224379 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:51.397876 2523870 node_ready.go:49] node "addons-078133" has status "Ready":"True"
	I0915 06:39:51.397903 2523870 node_ready.go:38] duration metric: took 39.533978864s for node "addons-078133" to be "Ready" ...
	I0915 06:39:51.397914 2523870 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:39:51.427264 2523870 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:39:51.427292 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:51.464114 2523870 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7vkbz" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:51.590510 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:51.591035 2523870 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:39:51.591054 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:51.769687 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:51.901853 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:52.030916 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:52.032462 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:52.223429 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:52.391680 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:52.523484 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:52.524528 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:52.718617 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:52.891172 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:52.971134 2523870 pod_ready.go:93] pod "coredns-7c65d6cfc9-7vkbz" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.971160 2523870 pod_ready.go:82] duration metric: took 1.507009842s for pod "coredns-7c65d6cfc9-7vkbz" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.971209 2523870 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.977562 2523870 pod_ready.go:93] pod "etcd-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.977605 2523870 pod_ready.go:82] duration metric: took 6.380539ms for pod "etcd-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.977622 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.984413 2523870 pod_ready.go:93] pod "kube-apiserver-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.984443 2523870 pod_ready.go:82] duration metric: took 6.771659ms for pod "kube-apiserver-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.984456 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.990371 2523870 pod_ready.go:93] pod "kube-controller-manager-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.990397 2523870 pod_ready.go:82] duration metric: took 5.931499ms for pod "kube-controller-manager-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.990414 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fjj4k" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.996392 2523870 pod_ready.go:93] pod "kube-proxy-fjj4k" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.996424 2523870 pod_ready.go:82] duration metric: took 6.001429ms for pod "kube-proxy-fjj4k" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.996438 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:53.009143 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:53.010564 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:53.218339 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:53.368479 2523870 pod_ready.go:93] pod "kube-scheduler-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:53.368505 2523870 pod_ready.go:82] duration metric: took 372.058726ms for pod "kube-scheduler-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:53.368517 2523870 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:53.391482 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:53.508086 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:53.509396 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:53.719334 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:53.893534 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:54.008069 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:54.009214 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:54.220473 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:54.393145 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:54.506031 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:54.515648 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:54.718589 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:54.892614 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:55.007453 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:55.010827 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:55.222250 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:55.376527 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:39:55.392570 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:55.506637 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:55.508411 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:55.718235 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:55.891769 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:56.006852 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:56.009587 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:56.219174 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:56.390762 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:56.504692 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:56.506044 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:56.718089 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:56.901935 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:57.005894 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:57.007119 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:57.218515 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:57.392369 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:57.506920 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:57.508332 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:57.717995 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:57.875345 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:39:57.892007 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:58.006101 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:58.006268 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:58.226454 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:58.392438 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:58.506852 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:58.507582 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:58.718390 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:58.893006 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:59.004892 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:59.007281 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:59.218349 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:59.391747 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:59.507785 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:59.511002 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:59.718650 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:59.876003 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:39:59.892455 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:00.007347 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:00.009528 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:00.245436 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:00.508623 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:00.535863 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:00.537735 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:00.723119 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:00.901726 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:01.012175 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:01.013228 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:01.223627 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:01.397325 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:01.508050 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:01.509577 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:01.719168 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:01.876338 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:01.893359 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:02.016637 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:02.019038 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:02.219910 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:02.392659 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:02.529881 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:02.531435 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:02.719132 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:02.893546 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:03.012685 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:03.014579 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:03.224218 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:03.391738 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:03.508749 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:03.512180 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:03.719109 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:03.876617 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:03.893892 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:04.012887 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:04.014341 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:04.218097 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:04.392063 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:04.503904 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:04.504946 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:04.717690 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:04.891182 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:05.010877 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:05.011628 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:05.217387 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:05.399458 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:05.505163 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:05.506344 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:05.721686 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:05.876868 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:05.893999 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:06.009105 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:06.010539 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:06.218863 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:06.391805 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:06.504869 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:06.505897 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:06.717807 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:06.900869 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:07.011645 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:07.012942 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:07.217184 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:07.391107 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:07.504957 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:07.505322 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:07.717633 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:07.899952 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:08.011925 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:08.013069 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:08.217268 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:08.376650 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:08.397803 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:08.505492 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:08.506686 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:08.718464 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:08.891562 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:09.005433 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:09.007473 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:09.218676 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:09.393023 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:09.504274 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:09.504893 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:09.720362 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:09.900991 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:10.009437 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:10.010607 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:10.217916 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:10.391420 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:10.503362 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:10.504726 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:10.718554 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:10.875439 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:10.891030 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:11.006830 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:11.007545 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:11.218297 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:11.394784 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:11.505674 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:11.507120 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:11.717797 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:11.892090 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:12.012833 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:12.014665 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:12.218750 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:12.391423 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:12.504227 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:12.505056 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:12.717972 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:12.891091 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:13.004369 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:13.006898 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:13.217462 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:13.375022 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:13.391234 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:13.505887 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:13.509132 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:13.719365 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:13.892337 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:14.027805 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:14.029543 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:14.218097 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:14.394284 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:14.503684 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:14.504768 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:14.720283 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:14.891679 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:15.005388 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:15.108689 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:15.218457 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:15.375762 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:15.392211 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:15.504886 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:15.505624 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:15.717476 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:15.891681 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:16.009431 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:16.012968 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:16.218788 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:16.391091 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:16.505725 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:16.508000 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:16.719209 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:16.893291 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:17.011839 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:17.012867 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:17.219510 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:17.376009 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:17.392084 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:17.506117 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:17.509472 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:17.718736 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:17.892359 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:18.011278 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:18.011976 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:18.218284 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:18.391739 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:18.504420 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:18.505593 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:18.718246 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:18.891814 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:19.009582 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:19.010144 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:19.217852 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:19.391270 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:19.505094 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:19.505450 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:19.717938 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:19.876031 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:19.892583 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:20.022672 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:20.023496 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:20.219111 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:20.391707 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:20.504488 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:20.505535 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:20.735971 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:20.894400 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:21.005148 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:21.006658 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:21.218083 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:21.392231 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:21.505987 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:21.507535 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:21.719497 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:21.876166 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:21.895827 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:22.005926 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:22.015854 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:22.218563 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:22.392508 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:22.505920 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:22.507345 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:22.721627 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:22.891650 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:23.007542 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:23.011624 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:23.218496 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:23.424380 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:23.517867 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:23.519670 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:23.717708 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:23.877493 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:23.892213 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:24.009293 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:24.010054 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:24.218495 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:24.391439 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:24.505968 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:24.507321 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:24.718282 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:24.892049 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:25.021077 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:25.027241 2523870 kapi.go:107] duration metric: took 1m9.528110217s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:40:25.217764 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:25.390797 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:25.503618 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:25.717901 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:25.893381 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:26.009074 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:26.217567 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:26.374885 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:26.391801 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:26.503999 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:26.722475 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:26.890983 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:27.006887 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:27.219513 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:27.392340 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:27.504077 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:27.718269 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:27.892904 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:28.004023 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:28.219042 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:28.376299 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:28.399220 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:28.504498 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:28.718964 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:28.896135 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:29.006026 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:29.218032 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:29.393178 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:29.509539 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:29.718139 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:29.893776 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:30.005062 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:30.234708 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:30.393094 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:30.505057 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:30.718540 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:30.876680 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:30.893933 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:31.008054 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:31.219075 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:31.404942 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:31.505691 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:31.718932 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:31.893105 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:32.009801 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:32.219037 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:32.393111 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:32.504180 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:32.719026 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:32.876996 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:32.892930 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:33.005692 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:33.217717 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:33.391361 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:33.504310 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:33.718712 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:33.891841 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:34.005309 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:34.219141 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:34.423022 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:34.503613 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:34.726243 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:34.896767 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:35.004767 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:35.218452 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:35.378703 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:35.398054 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:35.504269 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:35.719379 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:35.896417 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:36.020512 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:36.218661 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:36.393103 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:36.505162 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:36.718101 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:36.895403 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:37.007273 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:37.218042 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:37.392145 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:37.503483 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:37.718902 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:37.875591 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:37.891548 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:38.005969 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:38.217510 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:38.391997 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:38.503726 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:38.718614 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:38.891369 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:39.005328 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:39.217328 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:39.391927 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:39.504617 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:39.718749 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:39.876161 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:39.891185 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:40.004226 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:40.218071 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:40.392301 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:40.505556 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:40.717967 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:40.892236 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:41.005881 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:41.218764 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:41.395672 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:41.503746 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:41.719115 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:41.876921 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:41.895525 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:42.011166 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:42.218028 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:42.392438 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:42.503989 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:42.718426 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:42.891965 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:43.005470 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:43.218325 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:43.391674 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:43.503672 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:43.718546 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:43.891279 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:44.009592 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:44.218862 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:44.377134 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:44.391140 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:44.504636 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:44.718865 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:44.892732 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:45.005120 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:45.220362 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:45.393290 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:45.504799 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:45.719264 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:45.892303 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:46.010041 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:46.222170 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:46.392718 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:46.507034 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:46.719634 2523870 kapi.go:107] duration metric: took 1m27.005612282s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:40:46.721255 2523870 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-078133 cluster.
	I0915 06:40:46.722663 2523870 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:40:46.723801 2523870 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 06:40:46.876708 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:46.894513 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:47.005594 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:47.392485 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:47.504081 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:47.897917 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:48.005531 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:48.391420 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:48.503783 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:48.878884 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:48.893603 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:49.007483 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:49.391911 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:49.505584 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:49.891537 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:50.012368 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:50.392057 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:50.503606 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:50.891754 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:51.004331 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:51.379225 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:51.391873 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:51.504975 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:51.892942 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:52.069383 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:52.397630 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:52.504476 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:52.891313 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:53.011566 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:53.392684 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:53.504669 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:53.875903 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:53.891954 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:54.006138 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:54.392101 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:54.503774 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:54.899918 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:55.006756 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:55.392260 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:55.504130 2523870 kapi.go:107] duration metric: took 1m40.004978236s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:40:55.892947 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:56.382504 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:56.392491 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:56.924548 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:57.393779 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:57.891466 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:58.392642 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:58.877042 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:58.891963 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:59.391610 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:59.893537 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:00.397105 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:00.904885 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:01.375303 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:01.391382 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:01.892308 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:02.392116 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:02.894530 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:03.375597 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:03.392955 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:03.891747 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:04.399605 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:04.891765 2523870 kapi.go:107] duration metric: took 1m49.0055889s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:41:04.894260 2523870 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0915 06:41:04.895478 2523870 addons.go:510] duration metric: took 1m55.855150005s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0915 06:41:05.875469 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:08.377139 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:10.875168 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:11.380090 2523870 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"True"
	I0915 06:41:11.380127 2523870 pod_ready.go:82] duration metric: took 1m18.011601636s for pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace to be "Ready" ...
	I0915 06:41:11.380141 2523870 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-cwx62" in "kube-system" namespace to be "Ready" ...
	I0915 06:41:11.415635 2523870 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-cwx62" in "kube-system" namespace has status "Ready":"True"
	I0915 06:41:11.415662 2523870 pod_ready.go:82] duration metric: took 35.513361ms for pod "nvidia-device-plugin-daemonset-cwx62" in "kube-system" namespace to be "Ready" ...
	I0915 06:41:11.415685 2523870 pod_ready.go:39] duration metric: took 1m20.01772025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
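The `kapi.go:96` and `pod_ready.go:103` entries above record a poll loop: roughly every 500ms per label selector, minikube re-lists the matching pods and keeps waiting until they all report Ready, then emits the `kapi.go:107` duration metric. A minimal client-go sketch of that poll-until-Ready pattern; this is an illustration under stated assumptions, not minikube's actual kapi.go/pod_ready.go code.

// Sketch of the poll-until-Ready loop the log above records (not minikube's code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the "Ready" check in the pod_ready lines: true only when
// the pod's PodReady condition is ConditionTrue.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForSelector(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // log above ticks at ~500ms per selector
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The pod_ready lines above wait up to 6m0s per pod.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForSelector(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
	fmt.Println("pods Ready")
}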
	I0915 06:41:11.415708 2523870 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:41:11.415741 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:41:11.415815 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:41:11.495394 2523870 cri.go:89] found id: "e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:11.495424 2523870 cri.go:89] found id: ""
	I0915 06:41:11.495434 2523870 logs.go:276] 1 containers: [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6]
	I0915 06:41:11.495517 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.499500 2523870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:41:11.499585 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:41:11.550559 2523870 cri.go:89] found id: "aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:11.550594 2523870 cri.go:89] found id: ""
	I0915 06:41:11.550603 2523870 logs.go:276] 1 containers: [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6]
	I0915 06:41:11.550667 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.554309 2523870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:41:11.554399 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:41:11.601798 2523870 cri.go:89] found id: "85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:11.601821 2523870 cri.go:89] found id: ""
	I0915 06:41:11.601829 2523870 logs.go:276] 1 containers: [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c]
	I0915 06:41:11.601888 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.605508 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:41:11.605625 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:41:11.647917 2523870 cri.go:89] found id: "9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:11.647991 2523870 cri.go:89] found id: ""
	I0915 06:41:11.648013 2523870 logs.go:276] 1 containers: [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159]
	I0915 06:41:11.648110 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.651911 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:41:11.652032 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:41:11.698154 2523870 cri.go:89] found id: "7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:11.698186 2523870 cri.go:89] found id: ""
	I0915 06:41:11.698195 2523870 logs.go:276] 1 containers: [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee]
	I0915 06:41:11.698256 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.701917 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:41:11.701995 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:41:11.746530 2523870 cri.go:89] found id: "fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:11.746597 2523870 cri.go:89] found id: ""
	I0915 06:41:11.746615 2523870 logs.go:276] 1 containers: [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1]
	I0915 06:41:11.746685 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.750359 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:41:11.750457 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:41:11.793770 2523870 cri.go:89] found id: "0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:11.793794 2523870 cri.go:89] found id: ""
	I0915 06:41:11.793802 2523870 logs.go:276] 1 containers: [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725]
	I0915 06:41:11.793884 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.797463 2523870 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:41:11.797492 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:41:11.992092 2523870 logs.go:123] Gathering logs for etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] ...
	I0915 06:41:11.992123 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:12.054295 2523870 logs.go:123] Gathering logs for kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] ...
	I0915 06:41:12.054337 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:12.107869 2523870 logs.go:123] Gathering logs for kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] ...
	I0915 06:41:12.107906 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:12.152727 2523870 logs.go:123] Gathering logs for container status ...
	I0915 06:41:12.152760 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:41:12.209277 2523870 logs.go:123] Gathering logs for kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] ...
	I0915 06:41:12.209313 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:12.282525 2523870 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:41:12.282570 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:41:12.379304 2523870 logs.go:123] Gathering logs for kubelet ...
	I0915 06:41:12.379387 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0915 06:41:12.452980 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028288    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.453256 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.453428 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.453641 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.453826 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.454053 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:12.488341 2523870 logs.go:123] Gathering logs for dmesg ...
	I0915 06:41:12.488390 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:41:12.506041 2523870 logs.go:123] Gathering logs for kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] ...
	I0915 06:41:12.506071 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:12.563059 2523870 logs.go:123] Gathering logs for coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] ...
	I0915 06:41:12.563096 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:12.606199 2523870 logs.go:123] Gathering logs for kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] ...
	I0915 06:41:12.606234 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:12.648655 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:12.648683 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0915 06:41:12.648741 2523870 out.go:270] X Problems detected in kubelet:
	W0915 06:41:12.648758 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.648765 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.648780 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.648787 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.648799 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:12.648833 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:12.648843 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
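The log-gathering cycle above follows one pattern per control-plane component: discover the container ID with `crictl ps -a --quiet --name=<component>`, then tail its logs with `crictl logs --tail 400 <id>`. A short Go sketch of that same shell sequence, assuming (as the runner above does) that crictl is installed and sudo is available:

// Sketch of the discover-then-tail sequence from the gathering cycle above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Same discovery command the cri.go lines record.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Printf("=== %s [%s] ===\n", name, id)
			// Same log command the ssh_runner lines issue.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}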
	I0915 06:41:22.649917 2523870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:41:22.664122 2523870 api_server.go:72] duration metric: took 2m13.624140746s to wait for apiserver process to appear ...
	I0915 06:41:22.664149 2523870 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:41:22.664188 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:41:22.664251 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:41:22.715271 2523870 cri.go:89] found id: "e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:22.715298 2523870 cri.go:89] found id: ""
	I0915 06:41:22.715308 2523870 logs.go:276] 1 containers: [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6]
	I0915 06:41:22.715367 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.718981 2523870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:41:22.719054 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:41:22.758523 2523870 cri.go:89] found id: "aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:22.758548 2523870 cri.go:89] found id: ""
	I0915 06:41:22.758558 2523870 logs.go:276] 1 containers: [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6]
	I0915 06:41:22.758622 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.762372 2523870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:41:22.762450 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:41:22.803919 2523870 cri.go:89] found id: "85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:22.803939 2523870 cri.go:89] found id: ""
	I0915 06:41:22.803946 2523870 logs.go:276] 1 containers: [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c]
	I0915 06:41:22.804003 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.807829 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:41:22.807902 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:41:22.846386 2523870 cri.go:89] found id: "9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:22.846461 2523870 cri.go:89] found id: ""
	I0915 06:41:22.846477 2523870 logs.go:276] 1 containers: [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159]
	I0915 06:41:22.846550 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.850418 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:41:22.850502 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:41:22.894080 2523870 cri.go:89] found id: "7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:22.894105 2523870 cri.go:89] found id: ""
	I0915 06:41:22.894113 2523870 logs.go:276] 1 containers: [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee]
	I0915 06:41:22.894173 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.898275 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:41:22.898353 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:41:22.938696 2523870 cri.go:89] found id: "fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:22.938717 2523870 cri.go:89] found id: ""
	I0915 06:41:22.938725 2523870 logs.go:276] 1 containers: [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1]
	I0915 06:41:22.938785 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.942715 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:41:22.942798 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:41:22.990421 2523870 cri.go:89] found id: "0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:22.990492 2523870 cri.go:89] found id: ""
	I0915 06:41:22.990514 2523870 logs.go:276] 1 containers: [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725]
	I0915 06:41:22.990602 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.994406 2523870 logs.go:123] Gathering logs for kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] ...
	I0915 06:41:22.994433 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:23.073513 2523870 logs.go:123] Gathering logs for etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] ...
	I0915 06:41:23.073551 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:23.141989 2523870 logs.go:123] Gathering logs for kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] ...
	I0915 06:41:23.142067 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:23.197032 2523870 logs.go:123] Gathering logs for kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] ...
	I0915 06:41:23.197109 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:23.242720 2523870 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:41:23.242756 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:41:23.337137 2523870 logs.go:123] Gathering logs for container status ...
	I0915 06:41:23.337178 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:41:23.394824 2523870 logs.go:123] Gathering logs for kubelet ...
	I0915 06:41:23.394853 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0915 06:41:23.446249 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028288    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.446518 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.446688 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.446894 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.447080 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.447305 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:23.482115 2523870 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:41:23.482149 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:41:23.634605 2523870 logs.go:123] Gathering logs for coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] ...
	I0915 06:41:23.634636 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:23.675844 2523870 logs.go:123] Gathering logs for kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] ...
	I0915 06:41:23.675873 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:23.723363 2523870 logs.go:123] Gathering logs for kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] ...
	I0915 06:41:23.723398 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:23.797568 2523870 logs.go:123] Gathering logs for dmesg ...
	I0915 06:41:23.797657 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:41:23.816018 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:23.816047 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0915 06:41:23.816107 2523870 out.go:270] X Problems detected in kubelet:
	W0915 06:41:23.816120 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.816132 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.816144 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.816154 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.816160 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:23.816172 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:23.816178 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:41:33.817587 2523870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 06:41:33.825225 2523870 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 06:41:33.826245 2523870 api_server.go:141] control plane version: v1.31.1
	I0915 06:41:33.826278 2523870 api_server.go:131] duration metric: took 11.162120505s to wait for apiserver health ...
	I0915 06:41:33.826288 2523870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:41:33.826312 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:41:33.826381 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:41:33.865811 2523870 cri.go:89] found id: "e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:33.865838 2523870 cri.go:89] found id: ""
	I0915 06:41:33.865847 2523870 logs.go:276] 1 containers: [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6]
	I0915 06:41:33.865905 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.869614 2523870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:41:33.869702 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:41:33.907874 2523870 cri.go:89] found id: "aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:33.907899 2523870 cri.go:89] found id: ""
	I0915 06:41:33.907907 2523870 logs.go:276] 1 containers: [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6]
	I0915 06:41:33.907963 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.911687 2523870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:41:33.911762 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:41:33.951105 2523870 cri.go:89] found id: "85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:33.951128 2523870 cri.go:89] found id: ""
	I0915 06:41:33.951137 2523870 logs.go:276] 1 containers: [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c]
	I0915 06:41:33.951196 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.954918 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:41:33.955022 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:41:33.994550 2523870 cri.go:89] found id: "9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:33.994574 2523870 cri.go:89] found id: ""
	I0915 06:41:33.994583 2523870 logs.go:276] 1 containers: [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159]
	I0915 06:41:33.994643 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.998722 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:41:33.998797 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:41:34.039134 2523870 cri.go:89] found id: "7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:34.039159 2523870 cri.go:89] found id: ""
	I0915 06:41:34.039167 2523870 logs.go:276] 1 containers: [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee]
	I0915 06:41:34.039230 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:34.043267 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:41:34.043394 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:41:34.084090 2523870 cri.go:89] found id: "fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:34.084114 2523870 cri.go:89] found id: ""
	I0915 06:41:34.084123 2523870 logs.go:276] 1 containers: [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1]
	I0915 06:41:34.084176 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:34.087813 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:41:34.087891 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:41:34.132606 2523870 cri.go:89] found id: "0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:34.132631 2523870 cri.go:89] found id: ""
	I0915 06:41:34.132639 2523870 logs.go:276] 1 containers: [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725]
	I0915 06:41:34.132712 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:34.136498 2523870 logs.go:123] Gathering logs for kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] ...
	I0915 06:41:34.136526 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:34.183368 2523870 logs.go:123] Gathering logs for kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] ...
	I0915 06:41:34.183400 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:34.226908 2523870 logs.go:123] Gathering logs for kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] ...
	I0915 06:41:34.226942 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:34.320748 2523870 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:41:34.320790 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:41:34.423086 2523870 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:41:34.423130 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:41:34.576900 2523870 logs.go:123] Gathering logs for kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] ...
	I0915 06:41:34.576934 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:34.653698 2523870 logs.go:123] Gathering logs for etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] ...
	I0915 06:41:34.653736 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:34.704486 2523870 logs.go:123] Gathering logs for coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] ...
	I0915 06:41:34.704520 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:34.751429 2523870 logs.go:123] Gathering logs for kubelet ...
	I0915 06:41:34.751460 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0915 06:41:34.804369 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028288    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.804610 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.804777 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.804990 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.805174 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.805399 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:34.842270 2523870 logs.go:123] Gathering logs for dmesg ...
	I0915 06:41:34.842324 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:41:34.861474 2523870 logs.go:123] Gathering logs for kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] ...
	I0915 06:41:34.861505 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:34.906963 2523870 logs.go:123] Gathering logs for container status ...
	I0915 06:41:34.906995 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:41:34.978748 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:34.978778 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0915 06:41:34.978858 2523870 out.go:270] X Problems detected in kubelet:
	W0915 06:41:34.978873 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.978881 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.978887 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.978894 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.979024 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:34.979041 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:34.979048 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:41:44.992518 2523870 system_pods.go:59] 18 kube-system pods found
	I0915 06:41:44.992563 2523870 system_pods.go:61] "coredns-7c65d6cfc9-7vkbz" [6ea47236-17f3-4492-8780-9ad56187f489] Running
	I0915 06:41:44.992570 2523870 system_pods.go:61] "csi-hostpath-attacher-0" [fbcdc315-eaad-4112-a529-eec22f5f7dce] Running
	I0915 06:41:44.992575 2523870 system_pods.go:61] "csi-hostpath-resizer-0" [f5efb463-f551-4dde-87d2-5ec91a566e81] Running
	I0915 06:41:44.992579 2523870 system_pods.go:61] "csi-hostpathplugin-cgcjb" [58bfa35e-116a-45b1-a414-47dadde393c6] Running
	I0915 06:41:44.992583 2523870 system_pods.go:61] "etcd-addons-078133" [b238897b-6598-4d41-915c-57e032f1b6ad] Running
	I0915 06:41:44.992589 2523870 system_pods.go:61] "kindnet-h6zsk" [9c090aa0-3e32-475a-9090-5423f0449354] Running
	I0915 06:41:44.992593 2523870 system_pods.go:61] "kube-apiserver-addons-078133" [9606256f-7a4c-47eb-91e3-29271e631613] Running
	I0915 06:41:44.992597 2523870 system_pods.go:61] "kube-controller-manager-addons-078133" [fa465a0e-97b0-4d5f-af33-a26dbf7e3985] Running
	I0915 06:41:44.992602 2523870 system_pods.go:61] "kube-ingress-dns-minikube" [d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6] Running
	I0915 06:41:44.992637 2523870 system_pods.go:61] "kube-proxy-fjj4k" [be724ff8-b220-4bfb-961c-c6cf462d9ddc] Running
	I0915 06:41:44.992646 2523870 system_pods.go:61] "kube-scheduler-addons-078133" [8a13493f-2796-4a2e-b83b-2f5f8f4f09bb] Running
	I0915 06:41:44.992651 2523870 system_pods.go:61] "metrics-server-84c5f94fbc-gfw99" [8d80d558-0f92-43df-9e1e-035dad596039] Running
	I0915 06:41:44.992655 2523870 system_pods.go:61] "nvidia-device-plugin-daemonset-cwx62" [6bc66e81-1049-45ef-b236-d0ad12ba82cf] Running
	I0915 06:41:44.992658 2523870 system_pods.go:61] "registry-66c9cd494c-dvjjx" [f6332eec-8451-4a18-b1e4-899a9c98a398] Running
	I0915 06:41:44.992662 2523870 system_pods.go:61] "registry-proxy-pph5w" [5bfdb7e0-869e-409d-b185-7e7c0d0386d6] Running
	I0915 06:41:44.992666 2523870 system_pods.go:61] "snapshot-controller-56fcc65765-6lsdb" [40abaaf0-851b-4368-bb6c-c43e5fd96b18] Running
	I0915 06:41:44.992669 2523870 system_pods.go:61] "snapshot-controller-56fcc65765-9dh55" [aac62e95-b572-45ce-ba9b-5b4451c8578b] Running
	I0915 06:41:44.992673 2523870 system_pods.go:61] "storage-provisioner" [30881b3f-dd6b-47c6-8171-db912be01758] Running
	I0915 06:41:44.992680 2523870 system_pods.go:74] duration metric: took 11.166385954s to wait for pod list to return data ...
	I0915 06:41:44.992692 2523870 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:41:44.995239 2523870 default_sa.go:45] found service account: "default"
	I0915 06:41:44.995269 2523870 default_sa.go:55] duration metric: took 2.570121ms for default service account to be created ...
	I0915 06:41:44.995278 2523870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:41:45.005688 2523870 system_pods.go:86] 18 kube-system pods found
	I0915 06:41:45.005731 2523870 system_pods.go:89] "coredns-7c65d6cfc9-7vkbz" [6ea47236-17f3-4492-8780-9ad56187f489] Running
	I0915 06:41:45.005739 2523870 system_pods.go:89] "csi-hostpath-attacher-0" [fbcdc315-eaad-4112-a529-eec22f5f7dce] Running
	I0915 06:41:45.005745 2523870 system_pods.go:89] "csi-hostpath-resizer-0" [f5efb463-f551-4dde-87d2-5ec91a566e81] Running
	I0915 06:41:45.005749 2523870 system_pods.go:89] "csi-hostpathplugin-cgcjb" [58bfa35e-116a-45b1-a414-47dadde393c6] Running
	I0915 06:41:45.005753 2523870 system_pods.go:89] "etcd-addons-078133" [b238897b-6598-4d41-915c-57e032f1b6ad] Running
	I0915 06:41:45.005758 2523870 system_pods.go:89] "kindnet-h6zsk" [9c090aa0-3e32-475a-9090-5423f0449354] Running
	I0915 06:41:45.005762 2523870 system_pods.go:89] "kube-apiserver-addons-078133" [9606256f-7a4c-47eb-91e3-29271e631613] Running
	I0915 06:41:45.005766 2523870 system_pods.go:89] "kube-controller-manager-addons-078133" [fa465a0e-97b0-4d5f-af33-a26dbf7e3985] Running
	I0915 06:41:45.005771 2523870 system_pods.go:89] "kube-ingress-dns-minikube" [d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6] Running
	I0915 06:41:45.005776 2523870 system_pods.go:89] "kube-proxy-fjj4k" [be724ff8-b220-4bfb-961c-c6cf462d9ddc] Running
	I0915 06:41:45.005780 2523870 system_pods.go:89] "kube-scheduler-addons-078133" [8a13493f-2796-4a2e-b83b-2f5f8f4f09bb] Running
	I0915 06:41:45.005785 2523870 system_pods.go:89] "metrics-server-84c5f94fbc-gfw99" [8d80d558-0f92-43df-9e1e-035dad596039] Running
	I0915 06:41:45.005792 2523870 system_pods.go:89] "nvidia-device-plugin-daemonset-cwx62" [6bc66e81-1049-45ef-b236-d0ad12ba82cf] Running
	I0915 06:41:45.005797 2523870 system_pods.go:89] "registry-66c9cd494c-dvjjx" [f6332eec-8451-4a18-b1e4-899a9c98a398] Running
	I0915 06:41:45.005801 2523870 system_pods.go:89] "registry-proxy-pph5w" [5bfdb7e0-869e-409d-b185-7e7c0d0386d6] Running
	I0915 06:41:45.005805 2523870 system_pods.go:89] "snapshot-controller-56fcc65765-6lsdb" [40abaaf0-851b-4368-bb6c-c43e5fd96b18] Running
	I0915 06:41:45.005811 2523870 system_pods.go:89] "snapshot-controller-56fcc65765-9dh55" [aac62e95-b572-45ce-ba9b-5b4451c8578b] Running
	I0915 06:41:45.005815 2523870 system_pods.go:89] "storage-provisioner" [30881b3f-dd6b-47c6-8171-db912be01758] Running
	I0915 06:41:45.005824 2523870 system_pods.go:126] duration metric: took 10.539108ms to wait for k8s-apps to be running ...
	I0915 06:41:45.005833 2523870 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:41:45.005903 2523870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:41:45.040231 2523870 system_svc.go:56] duration metric: took 34.383305ms WaitForService to wait for kubelet
	I0915 06:41:45.041762 2523870 kubeadm.go:582] duration metric: took 2m36.001781462s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:41:45.041984 2523870 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:41:45.049036 2523870 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 06:41:45.055344 2523870 node_conditions.go:123] node cpu capacity is 2
	I0915 06:41:45.061556 2523870 node_conditions.go:105] duration metric: took 17.573916ms to run NodePressure ...
	I0915 06:41:45.061585 2523870 start.go:241] waiting for startup goroutines ...
	I0915 06:41:45.061593 2523870 start.go:246] waiting for cluster config update ...
	I0915 06:41:45.061614 2523870 start.go:255] writing updated cluster config ...
	I0915 06:41:45.061999 2523870 ssh_runner.go:195] Run: rm -f paused
	I0915 06:41:45.465387 2523870 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:41:45.468637 2523870 out.go:177] * Done! kubectl is now configured to use "addons-078133" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 15 06:51:00 addons-078133 crio[962]: time="2024-09-15 06:51:00.762079334Z" level=info msg="Stopped pod sandbox: 871c0a5b6000bfa4cbc6ba1e9168a7212f178eef479ebf475c10dd171a4facf7" id=2d01a35c-7bbf-48e0-a537-0ecc27545d1e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.482425887Z" level=info msg="Stopping container: ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7 (timeout: 30s)" id=eba4e344-3413-4f02-9459-458560fa5bf1 name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:51:01 addons-078133 conmon[3572]: conmon ef9109f778ad6798c55b <ninfo>: container 3583 exited with status 2
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.511326377Z" level=info msg="Stopping container: 2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6 (timeout: 30s)" id=d82a9749-3664-4d52-b3d2-e470adea904a name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.631395066Z" level=info msg="Stopped container ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7: kube-system/registry-66c9cd494c-dvjjx/registry" id=eba4e344-3413-4f02-9459-458560fa5bf1 name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.631893327Z" level=info msg="Stopping pod sandbox: 7052292cc2a09c71c45ad171adffe6744f5766b04f62684b7e6f8b3e423fad59" id=b10fd95e-3456-4258-adec-0f2d380e3787 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.632124049Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-dvjjx Namespace:kube-system ID:7052292cc2a09c71c45ad171adffe6744f5766b04f62684b7e6f8b3e423fad59 UID:f6332eec-8451-4a18-b1e4-899a9c98a398 NetNS:/var/run/netns/1a1a18db-3eca-4de1-8f89-a980525cdd43 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.632259086Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-dvjjx from CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.683870743Z" level=info msg="Stopped pod sandbox: 7052292cc2a09c71c45ad171adffe6744f5766b04f62684b7e6f8b3e423fad59" id=b10fd95e-3456-4258-adec-0f2d380e3787 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.686667057Z" level=info msg="Stopped container 2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6: kube-system/registry-proxy-pph5w/registry-proxy" id=d82a9749-3664-4d52-b3d2-e470adea904a name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.687059753Z" level=info msg="Stopping pod sandbox: 5541934ac7b796599abada946b2aaa536b08ea376e80b39a4c334e780e204716" id=5628d6b1-af56-436e-aa25-4fc178534c8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.697858636Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-UIW77T77KQNES5CL - [0:0]\n:KUBE-HP-ZMCNS3RKMDLOPICN - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-CCJ7XU7SKW32XEEB - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xtz9n_ingress-nginx_80a49e6a-775f-4a72-ae75-261096c46397_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-UIW77T77KQNES5CL\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xtz9n_ingress-nginx_80a49e6a-775f-4a72-ae75-261096c46397_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-ZMCNS3RKMDLOPICN\n-A KUBE-HP-UIW77T77KQNES5CL -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xtz9n_ingress-nginx_80a49e6a-775f-4a72-ae75-261096c46397_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-UIW77T77KQNES5CL -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xtz9n_ingress-nginx_80a49e6a-775f-4a72-ae
75-261096c46397_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.20:443\n-A KUBE-HP-ZMCNS3RKMDLOPICN -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xtz9n_ingress-nginx_80a49e6a-775f-4a72-ae75-261096c46397_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-ZMCNS3RKMDLOPICN -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xtz9n_ingress-nginx_80a49e6a-775f-4a72-ae75-261096c46397_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.20:80\n-X KUBE-HP-CCJ7XU7SKW32XEEB\nCOMMIT\n"
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.702423038Z" level=info msg="Closing host port tcp:5000"
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.704131411Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.704334286Z" level=info msg="Got pod network &{Name:registry-proxy-pph5w Namespace:kube-system ID:5541934ac7b796599abada946b2aaa536b08ea376e80b39a4c334e780e204716 UID:5bfdb7e0-869e-409d-b185-7e7c0d0386d6 NetNS:/var/run/netns/c57620a8-a968-46b0-8464-24a21e147942 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.704465770Z" level=info msg="Deleting pod kube-system_registry-proxy-pph5w from CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:51:01 addons-078133 crio[962]: time="2024-09-15 06:51:01.753050293Z" level=info msg="Stopped pod sandbox: 5541934ac7b796599abada946b2aaa536b08ea376e80b39a4c334e780e204716" id=5628d6b1-af56-436e-aa25-4fc178534c8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:51:02 addons-078133 crio[962]: time="2024-09-15 06:51:02.081162652Z" level=info msg="Removing container: 2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6" id=09f8a142-d99e-44b6-87ac-450e743aa86a name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 15 06:51:02 addons-078133 crio[962]: time="2024-09-15 06:51:02.083205925Z" level=info msg="Stopping pod sandbox: 78a7e3e291856fbaa93e0afe0351416e246cd2431fb9024b93740bdf9dbeac5e" id=b5e7a8c8-da16-4b97-bed8-2ea3ef49b7f8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:51:02 addons-078133 crio[962]: time="2024-09-15 06:51:02.083703702Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:78a7e3e291856fbaa93e0afe0351416e246cd2431fb9024b93740bdf9dbeac5e UID:acf2ee38-acc9-4cb8-a5f7-5fda6973360c NetNS:/var/run/netns/8084ede0-9321-4f05-a7d5-58db99e7b8e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:51:02 addons-078133 crio[962]: time="2024-09-15 06:51:02.083859768Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:51:02 addons-078133 crio[962]: time="2024-09-15 06:51:02.116663736Z" level=info msg="Removed container 2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6: kube-system/registry-proxy-pph5w/registry-proxy" id=09f8a142-d99e-44b6-87ac-450e743aa86a name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 15 06:51:02 addons-078133 crio[962]: time="2024-09-15 06:51:02.123152021Z" level=info msg="Removing container: ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7" id=e56078ee-a25f-4c30-85d5-13a838510d95 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 15 06:51:02 addons-078133 crio[962]: time="2024-09-15 06:51:02.154778861Z" level=info msg="Stopped pod sandbox: 78a7e3e291856fbaa93e0afe0351416e246cd2431fb9024b93740bdf9dbeac5e" id=b5e7a8c8-da16-4b97-bed8-2ea3ef49b7f8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:51:02 addons-078133 crio[962]: time="2024-09-15 06:51:02.185249224Z" level=info msg="Removed container ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7: kube-system/registry-66c9cd494c-dvjjx/registry" id=e56078ee-a25f-4c30-85d5-13a838510d95 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	257b159f49793       docker.io/library/busybox@sha256:71e065368796c7368a99a072019b9fe73e28e225ae9882430579ec49a1e46235                            2 seconds ago       Exited              busybox                   0                   78a7e3e291856       test-local-path
	68cc56ea4b119       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            15 seconds ago      Exited              gadget                    7                   9cabfe60e7ce1       gadget-css4m
	9ddfb8c4ba14f       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             10 minutes ago      Running             controller                0                   5078eb39f626b       ingress-nginx-controller-bc57996ff-xtz9n
	0827a067b0cde       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 10 minutes ago      Running             gcp-auth                  0                   0dde73874d0cd       gcp-auth-89d5ffd79-dfdjh
	aa099ed135a63       gcr.io/cloud-spanner-emulator/emulator@sha256:41ec188288c7943f488600462b2b74002814e52439be82d15de33c3ee4898a58               10 minutes ago      Running             cloud-spanner-emulator    0                   51508c8b04c54       cloud-spanner-emulator-769b77f747-pw84g
	5564eb7326685       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   10 minutes ago      Exited              patch                     0                   aa76c10a86aa6       ingress-nginx-admission-patch-sqnfz
	ca5a0f6658f52       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             10 minutes ago      Running             local-path-provisioner    0                   d20b2c660e532       local-path-provisioner-86d989889c-w722z
	aa66b6bbbe960       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   10 minutes ago      Exited              create                    0                   189ef42c5e81a       ingress-nginx-admission-create-b57t6
	c1c95dfa2a499       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        11 minutes ago      Running             metrics-server            0                   6b2883d632ffa       metrics-server-84c5f94fbc-gfw99
	2246ddeb20532       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             11 minutes ago      Running             minikube-ingress-dns      0                   02a84f07cd68f       kube-ingress-dns-minikube
	d271b7f778ca6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             11 minutes ago      Running             storage-provisioner       0                   e16867b58e664       storage-provisioner
	85daa7360e5e9       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             11 minutes ago      Running             coredns                   0                   9ab5526bc1400       coredns-7c65d6cfc9-7vkbz
	0dd8f2e1d527f       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             11 minutes ago      Running             kindnet-cni               0                   4ab45f1d528e9       kindnet-h6zsk
	7effe62b4c9a3       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             11 minutes ago      Running             kube-proxy                0                   519d37d41f025       kube-proxy-fjj4k
	e96ddc5409269       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             12 minutes ago      Running             kube-apiserver            0                   1b90d84bbc3b0       kube-apiserver-addons-078133
	9b04df1237c35       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             12 minutes ago      Running             kube-scheduler            0                   5bcd311de4186       kube-scheduler-addons-078133
	fc20989b36b93       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             12 minutes ago      Running             kube-controller-manager   0                   37863f70ae7a4       kube-controller-manager-addons-078133
	aa1f1d2a843d0       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             12 minutes ago      Running             etcd                      0                   037f467425e39       etcd-addons-078133
	
	
	==> coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] <==
	[INFO] 10.244.0.7:60956 - 40381 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116937s
	[INFO] 10.244.0.7:45161 - 29366 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002240627s
	[INFO] 10.244.0.7:45161 - 32945 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003202302s
	[INFO] 10.244.0.7:37659 - 38912 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000204787s
	[INFO] 10.244.0.7:37659 - 18694 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141732s
	[INFO] 10.244.0.7:46398 - 25256 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000213993s
	[INFO] 10.244.0.7:46398 - 24995 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00027443s
	[INFO] 10.244.0.7:47479 - 52991 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072909s
	[INFO] 10.244.0.7:47479 - 46333 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005142s
	[INFO] 10.244.0.7:49213 - 1338 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005339s
	[INFO] 10.244.0.7:49213 - 49467 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072876s
	[INFO] 10.244.0.7:42802 - 41891 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00141695s
	[INFO] 10.244.0.7:42802 - 39841 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001484666s
	[INFO] 10.244.0.7:38900 - 44116 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066592s
	[INFO] 10.244.0.7:38900 - 30299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116486s
	[INFO] 10.244.0.19:47931 - 25633 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002470447s
	[INFO] 10.244.0.19:33148 - 45348 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002478143s
	[INFO] 10.244.0.19:56417 - 22070 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147508s
	[INFO] 10.244.0.19:50454 - 60030 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133371s
	[INFO] 10.244.0.19:42936 - 16948 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128678s
	[INFO] 10.244.0.19:52660 - 34977 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125519s
	[INFO] 10.244.0.19:59020 - 55342 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003112933s
	[INFO] 10.244.0.19:49810 - 53119 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003366441s
	[INFO] 10.244.0.19:56751 - 42495 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005208407s
	[INFO] 10.244.0.19:42362 - 42298 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.005481853s
	
	
	==> describe nodes <==
	Name:               addons-078133
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-078133
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-078133
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_39_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-078133
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:39:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-078133
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:50:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:50:07 +0000   Sun, 15 Sep 2024 06:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:50:07 +0000   Sun, 15 Sep 2024 06:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:50:07 +0000   Sun, 15 Sep 2024 06:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:50:07 +0000   Sun, 15 Sep 2024 06:39:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-078133
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd8b84dea15e4d35b14dc406bd3d7d26
	  System UUID:                a2ace0dd-aa7e-4476-816d-37514df39de9
	  Boot ID:                    86c781ec-01d2-4efb-aba1-a43f302ac663
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  default                     cloud-spanner-emulator-769b77f747-pw84g     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-css4m                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-dfdjh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xtz9n    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-7vkbz                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-addons-078133                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-h6zsk                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-078133                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-078133       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-fjj4k                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-078133                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-gfw99             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-w722z     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-078133 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-078133 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-078133 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node addons-078133 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node addons-078133 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node addons-078133 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node addons-078133 event: Registered Node addons-078133 in Controller
	  Normal   NodeReady                11m                kubelet          Node addons-078133 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep15 05:34] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000091 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001089] FS-Cache: O-cookie d=000000009ec4a1b9{9P.session} n=00000000933e989b
	[  +0.001105] FS-Cache: O-key=[10] '34333036383438313233'
	[  +0.000796] FS-Cache: N-cookie c=00000092 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000965] FS-Cache: N-cookie d=000000009ec4a1b9{9P.session} n=00000000c50af53f
	[  +0.001363] FS-Cache: N-key=[10] '34333036383438313233'
	[Sep15 06:08] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] <==
	{"level":"info","ts":"2024-09-15T06:38:58.060202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-15T06:38:58.060258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-15T06:38:58.060291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T06:38:58.060337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-15T06:38:58.060369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T06:38:58.065025Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-078133 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:38:58.065273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:38:58.065678Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.068367Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:38:58.068608Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:38:58.068687Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:38:58.069414Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:38:58.070446Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-15T06:38:58.073106Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.073273Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.088962Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.089741Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:38:58.090677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:39:10.078651Z","caller":"traceutil/trace.go:171","msg":"trace[978204264] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"138.849688ms","start":"2024-09-15T06:39:09.939783Z","end":"2024-09-15T06:39:10.078632Z","steps":["trace[978204264] 'process raft request'  (duration: 95.382705ms)","trace[978204264] 'compare'  (duration: 42.981654ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:39:13.438537Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.182536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:39:13.438634Z","caller":"traceutil/trace.go:171","msg":"trace[1902515032] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:440; }","duration":"112.30017ms","start":"2024-09-15T06:39:13.326320Z","end":"2024-09-15T06:39:13.438620Z","steps":["trace[1902515032] 'agreement among raft nodes before linearized reading'  (duration: 83.629989ms)","trace[1902515032] 'range keys from in-memory index tree'  (duration: 28.533716ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:39:51.757080Z","caller":"traceutil/trace.go:171","msg":"trace[1907155975] transaction","detail":"{read_only:false; response_revision:896; number_of_response:1; }","duration":"103.53271ms","start":"2024-09-15T06:39:51.653528Z","end":"2024-09-15T06:39:51.757061Z","steps":["trace[1907155975] 'process raft request'  (duration: 79.5189ms)","trace[1907155975] 'compare'  (duration: 23.406243ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:48:58.204333Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1530}
	{"level":"info","ts":"2024-09-15T06:48:58.238285Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1530,"took":"33.495045ms","hash":3104697584,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3293184,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-15T06:48:58.238443Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3104697584,"revision":1530,"compact-revision":-1}
	
	
	==> gcp-auth [0827a067b0cde94dfdfe774133d38b55169c16cd00de8fa5c926fac9c7c30441] <==
	2024/09/15 06:40:46 GCP Auth Webhook started!
	2024/09/15 06:41:45 Ready to marshal response ...
	2024/09/15 06:41:45 Ready to write response ...
	2024/09/15 06:41:45 Ready to marshal response ...
	2024/09/15 06:41:45 Ready to write response ...
	2024/09/15 06:41:46 Ready to marshal response ...
	2024/09/15 06:41:46 Ready to write response ...
	2024/09/15 06:49:53 Ready to marshal response ...
	2024/09/15 06:49:53 Ready to write response ...
	2024/09/15 06:50:00 Ready to marshal response ...
	2024/09/15 06:50:00 Ready to write response ...
	2024/09/15 06:50:20 Ready to marshal response ...
	2024/09/15 06:50:20 Ready to write response ...
	2024/09/15 06:50:54 Ready to marshal response ...
	2024/09/15 06:50:54 Ready to write response ...
	2024/09/15 06:50:55 Ready to marshal response ...
	2024/09/15 06:50:55 Ready to write response ...
	
	
	==> kernel <==
	 06:51:03 up 14:33,  0 users,  load average: 0.50, 0.64, 1.41
	Linux addons-078133 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] <==
	I0915 06:49:00.843409       1 main.go:299] handling current node
	I0915 06:49:10.837043       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:49:10.837084       1 main.go:299] handling current node
	I0915 06:49:20.837269       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:49:20.837996       1 main.go:299] handling current node
	I0915 06:49:30.843147       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:49:30.843270       1 main.go:299] handling current node
	I0915 06:49:40.836887       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:49:40.836923       1 main.go:299] handling current node
	I0915 06:49:50.840747       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:49:50.840784       1 main.go:299] handling current node
	I0915 06:50:00.836918       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:50:00.837065       1 main.go:299] handling current node
	I0915 06:50:10.836486       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:50:10.836519       1 main.go:299] handling current node
	I0915 06:50:20.836969       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:50:20.837012       1 main.go:299] handling current node
	I0915 06:50:30.837048       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:50:30.837081       1 main.go:299] handling current node
	I0915 06:50:40.836922       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:50:40.836961       1 main.go:299] handling current node
	I0915 06:50:50.837372       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:50:50.837407       1 main.go:299] handling current node
	I0915 06:51:00.837187       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:51:00.837341       1 main.go:299] handling current node
	
	
	==> kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] <==
	I0915 06:40:15.645050       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0915 06:41:11.371389       1 handler_proxy.go:99] no RequestInfo found in the context
	E0915 06:41:11.371558       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0915 06:41:11.372437       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.87.151:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.87.151:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.87.151:443: connect: connection refused" logger="UnhandledError"
	E0915 06:41:11.457253       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0915 06:41:11.516912       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0915 06:50:05.848869       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0915 06:50:29.043058       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	E0915 06:50:29.246052       1 watch.go:250] "Unhandled Error" err="write tcp 192.168.49.2:8443->10.244.0.13:46336: write: connection reset by peer" logger="UnhandledError"
	I0915 06:50:35.680485       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.680547       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.774314       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.774371       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.811502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.811566       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.819471       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.820168       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.950749       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.950798       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:50:36.819999       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:50:36.951215       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0915 06:50:36.956283       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] <==
	I0915 06:50:38.373526       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0915 06:50:38.373567       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:50:38.852680       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0915 06:50:38.852735       1 shared_informer.go:320] Caches are synced for garbage collector
	W0915 06:50:39.841726       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:50:39.841769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:50:40.752297       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:50:40.752350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:50:41.346554       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:50:41.346601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:50:42.731635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="4.456µs"
	W0915 06:50:45.752504       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:50:45.752549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:50:46.080721       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:50:46.080890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:50:47.443222       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:50:47.443268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:50:52.941704       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0915 06:50:55.599446       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:50:55.599491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:50:58.338493       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:50:58.338632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:50:59.721991       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:50:59.722035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:51:01.460772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.603µs"
	
	
	==> kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] <==
	I0915 06:39:13.431040       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:39:14.654548       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:39:14.654733       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:39:14.806709       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:39:14.806853       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:39:14.809136       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:39:14.809744       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:39:14.809813       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:39:14.834509       1 config.go:199] "Starting service config controller"
	I0915 06:39:14.847771       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:39:14.854180       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:39:14.881895       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:39:14.861657       1 config.go:328] "Starting node config controller"
	I0915 06:39:14.882892       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:39:14.982166       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:39:14.985602       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:39:14.987423       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] <==
	W0915 06:39:02.337994       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0915 06:39:02.338097       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:39:02.340793       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 06:39:02.338171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:39:02.340988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 06:39:02.341068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:39:02.341150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 06:39:02.341224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:39:02.341315       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:39:02.341387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:39:02.341464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:39:02.341546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:39:02.341632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 06:39:02.341721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0915 06:39:02.339535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0915 06:39:03.627072       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:51:01 addons-078133 kubelet[1502]: I0915 06:51:01.845396    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bfdb7e0-869e-409d-b185-7e7c0d0386d6-kube-api-access-z7hxj" (OuterVolumeSpecName: "kube-api-access-z7hxj") pod "5bfdb7e0-869e-409d-b185-7e7c0d0386d6" (UID: "5bfdb7e0-869e-409d-b185-7e7c0d0386d6"). InnerVolumeSpecName "kube-api-access-z7hxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:51:01 addons-078133 kubelet[1502]: I0915 06:51:01.846470    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6332eec-8451-4a18-b1e4-899a9c98a398-kube-api-access-pmjkc" (OuterVolumeSpecName: "kube-api-access-pmjkc") pod "f6332eec-8451-4a18-b1e4-899a9c98a398" (UID: "f6332eec-8451-4a18-b1e4-899a9c98a398"). InnerVolumeSpecName "kube-api-access-pmjkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:51:01 addons-078133 kubelet[1502]: I0915 06:51:01.943689    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z7hxj\" (UniqueName: \"kubernetes.io/projected/5bfdb7e0-869e-409d-b185-7e7c0d0386d6-kube-api-access-z7hxj\") on node \"addons-078133\" DevicePath \"\""
	Sep 15 06:51:01 addons-078133 kubelet[1502]: I0915 06:51:01.943731    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pmjkc\" (UniqueName: \"kubernetes.io/projected/f6332eec-8451-4a18-b1e4-899a9c98a398-kube-api-access-pmjkc\") on node \"addons-078133\" DevicePath \"\""
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.076610    1502 scope.go:117] "RemoveContainer" containerID="2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.117299    1502 scope.go:117] "RemoveContainer" containerID="2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: E0915 06:51:02.118346    1502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6\": container with ID starting with 2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6 not found: ID does not exist" containerID="2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.118387    1502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6"} err="failed to get container status \"2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6\": rpc error: code = NotFound desc = could not find container \"2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6\": container with ID starting with 2de78f133a12ed0701b6d5af26fd71260e96ab0cbb5729fadeceea243c00ecc6 not found: ID does not exist"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.118427    1502 scope.go:117] "RemoveContainer" containerID="ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.185665    1502 scope.go:117] "RemoveContainer" containerID="ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: E0915 06:51:02.186313    1502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7\": container with ID starting with ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7 not found: ID does not exist" containerID="ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.186346    1502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7"} err="failed to get container status \"ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7\": rpc error: code = NotFound desc = could not find container \"ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7\": container with ID starting with ef9109f778ad6798c55b86d53928d6af17f2bb04431de01623b645dd0c0e59b7 not found: ID does not exist"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.243531    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bfdb7e0-869e-409d-b185-7e7c0d0386d6" path="/var/lib/kubelet/pods/5bfdb7e0-869e-409d-b185-7e7c0d0386d6/volumes"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.243910    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef27436-fc89-4c92-ab3c-64d442224926" path="/var/lib/kubelet/pods/9ef27436-fc89-4c92-ab3c-64d442224926/volumes"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.244133    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6332eec-8451-4a18-b1e4-899a9c98a398" path="/var/lib/kubelet/pods/f6332eec-8451-4a18-b1e4-899a9c98a398/volumes"
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.245238    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/acf2ee38-acc9-4cb8-a5f7-5fda6973360c-pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab\") pod \"acf2ee38-acc9-4cb8-a5f7-5fda6973360c\" (UID: \"acf2ee38-acc9-4cb8-a5f7-5fda6973360c\") "
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.245492    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/acf2ee38-acc9-4cb8-a5f7-5fda6973360c-gcp-creds\") pod \"acf2ee38-acc9-4cb8-a5f7-5fda6973360c\" (UID: \"acf2ee38-acc9-4cb8-a5f7-5fda6973360c\") "
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.245667    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9gns\" (UniqueName: \"kubernetes.io/projected/acf2ee38-acc9-4cb8-a5f7-5fda6973360c-kube-api-access-n9gns\") pod \"acf2ee38-acc9-4cb8-a5f7-5fda6973360c\" (UID: \"acf2ee38-acc9-4cb8-a5f7-5fda6973360c\") "
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.245431    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf2ee38-acc9-4cb8-a5f7-5fda6973360c-pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab" (OuterVolumeSpecName: "data") pod "acf2ee38-acc9-4cb8-a5f7-5fda6973360c" (UID: "acf2ee38-acc9-4cb8-a5f7-5fda6973360c"). InnerVolumeSpecName "pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.245599    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acf2ee38-acc9-4cb8-a5f7-5fda6973360c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "acf2ee38-acc9-4cb8-a5f7-5fda6973360c" (UID: "acf2ee38-acc9-4cb8-a5f7-5fda6973360c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.248207    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf2ee38-acc9-4cb8-a5f7-5fda6973360c-kube-api-access-n9gns" (OuterVolumeSpecName: "kube-api-access-n9gns") pod "acf2ee38-acc9-4cb8-a5f7-5fda6973360c" (UID: "acf2ee38-acc9-4cb8-a5f7-5fda6973360c"). InnerVolumeSpecName "kube-api-access-n9gns". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.346596    1502 reconciler_common.go:288] "Volume detached for volume \"pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab\" (UniqueName: \"kubernetes.io/host-path/acf2ee38-acc9-4cb8-a5f7-5fda6973360c-pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab\") on node \"addons-078133\" DevicePath \"\""
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.346644    1502 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/acf2ee38-acc9-4cb8-a5f7-5fda6973360c-gcp-creds\") on node \"addons-078133\" DevicePath \"\""
	Sep 15 06:51:02 addons-078133 kubelet[1502]: I0915 06:51:02.346658    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-n9gns\" (UniqueName: \"kubernetes.io/projected/acf2ee38-acc9-4cb8-a5f7-5fda6973360c-kube-api-access-n9gns\") on node \"addons-078133\" DevicePath \"\""
	Sep 15 06:51:03 addons-078133 kubelet[1502]: I0915 06:51:03.089388    1502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78a7e3e291856fbaa93e0afe0351416e246cd2431fb9024b93740bdf9dbeac5e"
	
	
	==> storage-provisioner [d271b7f778ca6a5e43c6790e874afaf722384211e819eedb0f87091dcf8bb3ca] <==
	I0915 06:39:51.876457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:39:52.092367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:39:52.122251       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:39:52.141776       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:39:52.142096       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-078133_b714d925-ab44-41be-bcf1-c4695a08fcc2!
	I0915 06:39:52.143415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c1414a91-3bba-456a-9087-6984d4f1a1e5", APIVersion:"v1", ResourceVersion:"932", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-078133_b714d925-ab44-41be-bcf1-c4695a08fcc2 became leader
	I0915 06:39:52.243076       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-078133_b714d925-ab44-41be-bcf1-c4695a08fcc2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-078133 -n addons-078133
helpers_test.go:261: (dbg) Run:  kubectl --context addons-078133 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-b57t6 ingress-nginx-admission-patch-sqnfz helper-pod-delete-pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-078133 describe pod busybox ingress-nginx-admission-create-b57t6 ingress-nginx-admission-patch-sqnfz helper-pod-delete-pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-078133 describe pod busybox ingress-nginx-admission-create-b57t6 ingress-nginx-admission-patch-sqnfz helper-pod-delete-pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab: exit status 1 (197.396233ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-078133/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:41:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x9nfs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x9nfs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m20s                   default-scheduler  Successfully assigned default/busybox to addons-078133
	  Normal   Pulling    7m48s (x4 over 9m19s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m19s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m19s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m33s (x6 over 9m19s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m18s (x20 over 9m19s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-b57t6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sqnfz" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-078133 describe pod busybox ingress-nginx-admission-create-b57t6 ingress-nginx-admission-patch-sqnfz helper-pod-delete-pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.73s)

TestAddons/parallel/Ingress (153.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-078133 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-078133 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-078133 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cc263dd2-988c-4601-9d56-53793e6c08a3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cc263dd2-988c-4601-9d56-53793e6c08a3] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004255644s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-078133 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.499907172s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-078133 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-078133 addons disable ingress-dns --alsologtostderr -v=1: (1.366252342s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-078133 addons disable ingress --alsologtostderr -v=1: (7.808346648s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-078133
helpers_test.go:235: (dbg) docker inspect addons-078133:

-- stdout --
	[
	    {
	        "Id": "7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde",
	        "Created": "2024-09-15T06:38:37.750228282Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2524440,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:38:37.907510174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1b71fa87733590eb4674b16f6945626ae533f3af37066893e3fd70eb9476268",
	        "ResolvConfPath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/hostname",
	        "HostsPath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/hosts",
	        "LogPath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde-json.log",
	        "Name": "/addons-078133",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-078133:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-078133",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420-init/diff:/var/lib/docker/overlay2/72792481ba3fe11d67c9c5bebed6121eb09dffa903ddf816dfb06e703f2d9d5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-078133",
	                "Source": "/var/lib/docker/volumes/addons-078133/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-078133",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-078133",
	                "name.minikube.sigs.k8s.io": "addons-078133",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c8d7e1050dbe4977f54b06c2224002186fb12e89f8d90b585337ed8c180c6bd",
	            "SandboxKey": "/var/run/docker/netns/0c8d7e1050db",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35748"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35749"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35752"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35750"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35751"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-078133": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "61892ade19da7989ac86d074df0c7f6076bb69e05029d3382c7c93eab898c4ab",
	                    "EndpointID": "5578870202f5d628a4be39c5ca56e5901d1922ca753b45b5f33733d1f214df65",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-078133",
	                        "7434fa99399a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-078133 -n addons-078133
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-078133 logs -n 25: (1.55783195s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-196406                                                                     | download-only-196406   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| delete  | -p download-only-600407                                                                     | download-only-600407   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| start   | --download-only -p                                                                          | download-docker-842211 | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | download-docker-842211                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-842211                                                                   | download-docker-842211 | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-404653   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | binary-mirror-404653                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33149                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-404653                                                                     | binary-mirror-404653   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| addons  | enable dashboard -p                                                                         | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | addons-078133                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | addons-078133                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-078133 --wait=true                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-078133 addons                                                                        | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-078133 addons                                                                        | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | -p addons-078133                                                                            |                        |         |         |                     |                     |
	| ip      | addons-078133 ip                                                                            | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-078133 ssh cat                                                                       | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | /opt/local-path-provisioner/pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | addons-078133                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | -p addons-078133                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:52 UTC |
	|         | addons-078133                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-078133 ssh curl -s                                                                   | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:52 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-078133 ip                                                                            | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
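
For quick reproduction, the multi-line `start` entry in the Audit table above corresponds to roughly this single invocation (reconstructed from the Args column; the binary path matches the MINIKUBE_BIN value shown later in this log):

    out/minikube-linux-arm64 start -p addons-078133 --wait=true \
      --memory=4000 --alsologtostderr \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
      --driver=docker --container-runtime=crio \
      --addons=ingress --addons=ingress-dns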
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:38:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:38:12.787229 2523870 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:38:12.787649 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:38:12.787663 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:38:12.787669 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:38:12.787948 2523870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 06:38:12.788417 2523870 out.go:352] Setting JSON to false
	I0915 06:38:12.789322 2523870 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":51644,"bootTime":1726330649,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0915 06:38:12.789406 2523870 start.go:139] virtualization:  
	I0915 06:38:12.792757 2523870 out.go:177] * [addons-078133] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 06:38:12.795650 2523870 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:38:12.795696 2523870 notify.go:220] Checking for updates...
	I0915 06:38:12.799075 2523870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:38:12.801817 2523870 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 06:38:12.804477 2523870 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	I0915 06:38:12.807247 2523870 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 06:38:12.809885 2523870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:38:12.812844 2523870 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:38:12.839036 2523870 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:38:12.839177 2523870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:38:12.891358 2523870 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:38:12.881981504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:38:12.891480 2523870 docker.go:318] overlay module found
	I0915 06:38:12.895859 2523870 out.go:177] * Using the docker driver based on user configuration
	I0915 06:38:12.898575 2523870 start.go:297] selected driver: docker
	I0915 06:38:12.898603 2523870 start.go:901] validating driver "docker" against <nil>
	I0915 06:38:12.898625 2523870 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:38:12.899275 2523870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:38:12.952158 2523870 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:38:12.942889904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:38:12.952417 2523870 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:38:12.952666 2523870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:38:12.955396 2523870 out.go:177] * Using Docker driver with root privileges
	I0915 06:38:12.957978 2523870 cni.go:84] Creating CNI manager for ""
	I0915 06:38:12.958053 2523870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:38:12.958067 2523870 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 06:38:12.958154 2523870 start.go:340] cluster config:
	{Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:38:12.961074 2523870 out.go:177] * Starting "addons-078133" primary control-plane node in "addons-078133" cluster
	I0915 06:38:12.963705 2523870 cache.go:121] Beginning downloading kic base image for docker with crio
	I0915 06:38:12.966437 2523870 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:38:12.969038 2523870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:38:12.969094 2523870 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0915 06:38:12.969106 2523870 cache.go:56] Caching tarball of preloaded images
	I0915 06:38:12.969131 2523870 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:38:12.969194 2523870 preload.go:172] Found /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0915 06:38:12.969204 2523870 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 06:38:12.969614 2523870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/config.json ...
	I0915 06:38:12.969647 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/config.json: {Name:mkd56c679d1e8eeb25c48c5bb5d09233f14404e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:12.984555 2523870 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:38:12.984708 2523870 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:38:12.984732 2523870 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:38:12.984740 2523870 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:38:12.984748 2523870 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:38:12.984758 2523870 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:38:30.356936 2523870 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:38:30.356980 2523870 cache.go:194] Successfully downloaded all kic artifacts
	I0915 06:38:30.357009 2523870 start.go:360] acquireMachinesLock for addons-078133: {Name:mkd22383cf6e30905104727dd6882efae296baf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:38:30.357138 2523870 start.go:364] duration metric: took 107.583µs to acquireMachinesLock for "addons-078133"
	I0915 06:38:30.357171 2523870 start.go:93] Provisioning new machine with config: &{Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:38:30.357256 2523870 start.go:125] createHost starting for "" (driver="docker")
	I0915 06:38:30.358886 2523870 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 06:38:30.359147 2523870 start.go:159] libmachine.API.Create for "addons-078133" (driver="docker")
	I0915 06:38:30.359182 2523870 client.go:168] LocalClient.Create starting
	I0915 06:38:30.359309 2523870 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem
	I0915 06:38:31.028935 2523870 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem
	I0915 06:38:31.157412 2523870 cli_runner.go:164] Run: docker network inspect addons-078133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 06:38:31.173542 2523870 cli_runner.go:211] docker network inspect addons-078133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 06:38:31.173630 2523870 network_create.go:284] running [docker network inspect addons-078133] to gather additional debugging logs...
	I0915 06:38:31.173652 2523870 cli_runner.go:164] Run: docker network inspect addons-078133
	W0915 06:38:31.189395 2523870 cli_runner.go:211] docker network inspect addons-078133 returned with exit code 1
	I0915 06:38:31.189428 2523870 network_create.go:287] error running [docker network inspect addons-078133]: docker network inspect addons-078133: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-078133 not found
	I0915 06:38:31.189442 2523870 network_create.go:289] output of [docker network inspect addons-078133]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-078133 not found
	
	** /stderr **
	I0915 06:38:31.189539 2523870 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:38:31.205841 2523870 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001792940}
	I0915 06:38:31.205885 2523870 network_create.go:124] attempt to create docker network addons-078133 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 06:38:31.205944 2523870 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-078133 addons-078133
	I0915 06:38:31.304079 2523870 network_create.go:108] docker network addons-078133 192.168.49.0/24 created
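
The network-create step above can be reproduced standalone. A minimal sketch, assuming a throwaway network name "demo-net" (hypothetical, not used by the test) and the same flags the log records; pick a free subnet if 192.168.49.0/24 is already taken:

    # Create a bridge network with the subnet/gateway/MTU options minikube chose above.
    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 demo-net
    # Verify the IPAM configuration that minikube reads back via "docker network inspect".
    docker network inspect demo-net \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
    docker network rm demo-net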
	I0915 06:38:31.304113 2523870 kic.go:121] calculated static IP "192.168.49.2" for the "addons-078133" container
	I0915 06:38:31.304203 2523870 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 06:38:31.322468 2523870 cli_runner.go:164] Run: docker volume create addons-078133 --label name.minikube.sigs.k8s.io=addons-078133 --label created_by.minikube.sigs.k8s.io=true
	I0915 06:38:31.345040 2523870 oci.go:103] Successfully created a docker volume addons-078133
	I0915 06:38:31.345137 2523870 cli_runner.go:164] Run: docker run --rm --name addons-078133-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-078133 --entrypoint /usr/bin/test -v addons-078133:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 06:38:33.575685 2523870 cli_runner.go:217] Completed: docker run --rm --name addons-078133-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-078133 --entrypoint /usr/bin/test -v addons-078133:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (2.230494087s)
	I0915 06:38:33.575720 2523870 oci.go:107] Successfully prepared a docker volume addons-078133
	I0915 06:38:33.575744 2523870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:38:33.575763 2523870 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 06:38:33.575830 2523870 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-078133:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 06:38:37.682758 2523870 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-078133:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.10688552s)
	I0915 06:38:37.682789 2523870 kic.go:203] duration metric: took 4.107023149s to extract preloaded images to volume ...
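
The two `docker run --rm` invocations above illustrate the pattern minikube uses to seed a named volume: mount the volume into a short-lived container and extract an archive into it, so the volume outlives the container with the preloaded contents. A minimal sketch of the same pattern under simplified assumptions (hypothetical names "demo-vol" and "seed.tar", plain tar instead of lz4, a stock ubuntu image instead of kicbase):

    docker volume create demo-vol
    # Throwaway container: once it exits, demo-vol retains the extracted files.
    docker run --rm \
      -v "$PWD/seed.tar:/seed.tar:ro" -v demo-vol:/extractDir \
      ubuntu tar -xf /seed.tar -C /extractDir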
	W0915 06:38:37.682941 2523870 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0915 06:38:37.683057 2523870 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 06:38:37.735978 2523870 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-078133 --name addons-078133 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-078133 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-078133 --network addons-078133 --ip 192.168.49.2 --volume addons-078133:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0915 06:38:38.073869 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Running}}
	I0915 06:38:38.096611 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:38:38.117014 2523870 cli_runner.go:164] Run: docker exec addons-078133 stat /var/lib/dpkg/alternatives/iptables
	I0915 06:38:38.193401 2523870 oci.go:144] the created container "addons-078133" has a running status.
	I0915 06:38:38.193429 2523870 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa...
	I0915 06:38:40.103212 2523870 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 06:38:40.124321 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:38:40.145609 2523870 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 06:38:40.145635 2523870 kic_runner.go:114] Args: [docker exec --privileged addons-078133 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 06:38:40.201133 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:38:40.223083 2523870 machine.go:93] provisionDockerMachine start ...
	I0915 06:38:40.223185 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:40.248426 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:40.248710 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:40.248727 2523870 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:38:40.384623 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-078133
	
	I0915 06:38:40.384649 2523870 ubuntu.go:169] provisioning hostname "addons-078133"
	I0915 06:38:40.384719 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:40.402539 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:40.402807 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:40.402827 2523870 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-078133 && echo "addons-078133" | sudo tee /etc/hostname
	I0915 06:38:40.553443 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-078133
	
	I0915 06:38:40.553586 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:40.571125 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:40.571387 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:40.571403 2523870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-078133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-078133/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-078133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:38:40.709939 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:38:40.709969 2523870 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-2517725/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-2517725/.minikube}
	I0915 06:38:40.710052 2523870 ubuntu.go:177] setting up certificates
	I0915 06:38:40.710065 2523870 provision.go:84] configureAuth start
	I0915 06:38:40.710167 2523870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-078133
	I0915 06:38:40.728157 2523870 provision.go:143] copyHostCerts
	I0915 06:38:40.728258 2523870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem (1082 bytes)
	I0915 06:38:40.728439 2523870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem (1123 bytes)
	I0915 06:38:40.728531 2523870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem (1675 bytes)
	I0915 06:38:40.728606 2523870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem org=jenkins.addons-078133 san=[127.0.0.1 192.168.49.2 addons-078133 localhost minikube]
	I0915 06:38:42.353273 2523870 provision.go:177] copyRemoteCerts
	I0915 06:38:42.353353 2523870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:38:42.353400 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.373293 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:42.471278 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 06:38:42.497795 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:38:42.522600 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 06:38:42.547736 2523870 provision.go:87] duration metric: took 1.83765139s to configureAuth
	I0915 06:38:42.547820 2523870 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:38:42.548046 2523870 config.go:182] Loaded profile config "addons-078133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:38:42.548166 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.565534 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:42.565797 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:42.565821 2523870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 06:38:42.807672 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 06:38:42.807751 2523870 machine.go:96] duration metric: took 2.584641806s to provisionDockerMachine
	I0915 06:38:42.807788 2523870 client.go:171] duration metric: took 12.44858555s to LocalClient.Create
	I0915 06:38:42.807845 2523870 start.go:167] duration metric: took 12.448698434s to libmachine.API.Create "addons-078133"
	I0915 06:38:42.807872 2523870 start.go:293] postStartSetup for "addons-078133" (driver="docker")
	I0915 06:38:42.807911 2523870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:38:42.808014 2523870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:38:42.808114 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.826066 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:42.926144 2523870 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:38:42.930078 2523870 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:38:42.930114 2523870 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:38:42.930124 2523870 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:38:42.930131 2523870 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:38:42.930144 2523870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/addons for local assets ...
	I0915 06:38:42.930220 2523870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/files for local assets ...
	I0915 06:38:42.930252 2523870 start.go:296] duration metric: took 122.36099ms for postStartSetup
	I0915 06:38:42.930585 2523870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-078133
	I0915 06:38:42.948043 2523870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/config.json ...
	I0915 06:38:42.948387 2523870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:38:42.948443 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.965578 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:43.062057 2523870 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:38:43.066878 2523870 start.go:128] duration metric: took 12.709604826s to createHost
	I0915 06:38:43.066945 2523870 start.go:83] releasing machines lock for "addons-078133", held for 12.709793154s
	I0915 06:38:43.067058 2523870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-078133
	I0915 06:38:43.084231 2523870 ssh_runner.go:195] Run: cat /version.json
	I0915 06:38:43.084291 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:43.084556 2523870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:38:43.084638 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:43.110679 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:43.113521 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:43.205306 2523870 ssh_runner.go:195] Run: systemctl --version
	I0915 06:38:43.331819 2523870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 06:38:43.475451 2523870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:38:43.479654 2523870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:38:43.503032 2523870 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0915 06:38:43.503135 2523870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:38:43.549259 2523870 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0915 06:38:43.549327 2523870 start.go:495] detecting cgroup driver to use...
	I0915 06:38:43.549376 2523870 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:38:43.549460 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 06:38:43.568882 2523870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 06:38:43.581182 2523870 docker.go:217] disabling cri-docker service (if available) ...
	I0915 06:38:43.581292 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 06:38:43.595995 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 06:38:43.611893 2523870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 06:38:43.708103 2523870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 06:38:43.812378 2523870 docker.go:233] disabling docker service ...
	I0915 06:38:43.812466 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 06:38:43.833320 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 06:38:43.845521 2523870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 06:38:43.943839 2523870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 06:38:44.039910 2523870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 06:38:44.052271 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:38:44.069425 2523870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 06:38:44.069497 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.079718 2523870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 06:38:44.079845 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.090489 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.100780 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.111161 2523870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:38:44.120858 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.131104 2523870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.148858 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.159069 2523870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:38:44.168402 2523870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:38:44.177003 2523870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:38:44.265072 2523870 ssh_runner.go:195] Run: sudo systemctl restart crio
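
Taken together, the sed edits above amount to a CRI-O drop-in along these lines (an approximation assembled from the sed expressions in this log, not the file's verbatim contents):

    # /etc/crio/crio.conf.d/02-crio.conf (approximate net effect)
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]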
	I0915 06:38:44.374011 2523870 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 06:38:44.374133 2523870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 06:38:44.378540 2523870 start.go:563] Will wait 60s for crictl version
	I0915 06:38:44.378656 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:38:44.382546 2523870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:38:44.424234 2523870 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0915 06:38:44.424349 2523870 ssh_runner.go:195] Run: crio --version
	I0915 06:38:44.475232 2523870 ssh_runner.go:195] Run: crio --version
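
The crictl check above picks up the runtime endpoint written to /etc/crictl.yaml earlier in this log; the same query can also be made with the endpoint passed explicitly, which is handy when that YAML is absent:

    # One-off version check against the same socket the log configures above.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version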
	I0915 06:38:44.519124 2523870 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0915 06:38:44.521747 2523870 cli_runner.go:164] Run: docker network inspect addons-078133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:38:44.537582 2523870 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 06:38:44.541419 2523870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:38:44.552857 2523870 kubeadm.go:883] updating cluster {Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:38:44.552984 2523870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:38:44.553046 2523870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:38:44.633055 2523870 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:38:44.633083 2523870 crio.go:433] Images already preloaded, skipping extraction
	I0915 06:38:44.633143 2523870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:38:44.673366 2523870 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:38:44.673388 2523870 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:38:44.673397 2523870 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0915 06:38:44.673491 2523870 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-078133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
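
The doubled ExecStart in the drop-in above is the standard systemd override idiom: an empty `ExecStart=` clears the command list inherited from the base kubelet.service, and the following line installs minikube's replacement with its own flags, so the packaged unit never has to be edited. A minimal sketch of the same mechanism, with a trimmed flag set for illustration:

    # write a drop-in that resets and replaces ExecStart, then reload systemd
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '[Service]\nExecStart=\nExecStart=%s\n' \
      '/var/lib/minikube/binaries/v1.31.1/kubelet --hostname-override=addons-078133 --node-ip=192.168.49.2' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
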
	I0915 06:38:44.673581 2523870 ssh_runner.go:195] Run: crio config
	I0915 06:38:44.732765 2523870 cni.go:84] Creating CNI manager for ""
	I0915 06:38:44.732858 2523870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:38:44.732877 2523870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:38:44.732902 2523870 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-078133 NodeName:addons-078133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:38:44.733049 2523870 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-078133"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 06:38:44.733130 2523870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:38:44.741946 2523870 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:38:44.742045 2523870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:38:44.750784 2523870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0915 06:38:44.770200 2523870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:38:44.789649 2523870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0915 06:38:44.808669 2523870 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:38:44.812327 2523870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:38:44.823008 2523870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:38:44.913291 2523870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:38:44.927747 2523870 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133 for IP: 192.168.49.2
	I0915 06:38:44.927778 2523870 certs.go:194] generating shared ca certs ...
	I0915 06:38:44.927795 2523870 certs.go:226] acquiring lock for ca certs: {Name:mk5e6b4b1562ab546f1aa57699f236200f49d7e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:44.928715 2523870 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key
	I0915 06:38:45.326164 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt ...
	I0915 06:38:45.326211 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt: {Name:mk5bc462617f9659ba52a2152c2f6ee2c4afd336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.326491 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key ...
	I0915 06:38:45.326511 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key: {Name:mke6fb53bd94c120122c79adc8bb1635818a4c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.326662 2523870 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key
	I0915 06:38:45.743346 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt ...
	I0915 06:38:45.743380 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt: {Name:mk061dad5fc3f04b4c5728856758e4e719a722f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.743581 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key ...
	I0915 06:38:45.743595 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key: {Name:mk8f4151cf3bb4e60b32b8767dc2cf5cf44a4505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.743681 2523870 certs.go:256] generating profile certs ...
	I0915 06:38:45.743744 2523870 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.key
	I0915 06:38:45.743762 2523870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt with IP's: []
	I0915 06:38:46.183135 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt ...
	I0915 06:38:46.183178 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: {Name:mkf0bebdecf567120b50e3d4771ed97fb5f77b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:46.184171 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.key ...
	I0915 06:38:46.184189 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.key: {Name:mkae22a5721ba63055014519e5295d510f1c607b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:46.184290 2523870 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b
	I0915 06:38:46.184313 2523870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 06:38:47.375989 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b ...
	I0915 06:38:47.376029 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b: {Name:mkbb0cbab611271bcaa81d025cb58e0f49d6b725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:47.376266 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b ...
	I0915 06:38:47.376282 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b: {Name:mk44cadca365ce4b4475fd5ecbd0d3a7ab4a5e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:47.376377 2523870 certs.go:381] copying /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b -> /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt
	I0915 06:38:47.376469 2523870 certs.go:385] copying /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b -> /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key
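
The SAN list the apiserver cert is generated with — 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2 — covers every address clients may dial: 10.96.0.1 is the first IP of the 10.96.0.0/12 service CIDR (the in-cluster `kubernetes` service), alongside loopback and the node IP. A sketch of one way to confirm what a cert was actually signed for, using the cert path from this run:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
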
	I0915 06:38:47.376532 2523870 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key
	I0915 06:38:47.376553 2523870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt with IP's: []
	I0915 06:38:48.296446 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt ...
	I0915 06:38:48.296479 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt: {Name:mk03e5126ebac87175cd074a3278a221669ecd43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:48.296678 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key ...
	I0915 06:38:48.296694 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key: {Name:mk184d4436eb1531806b2bfcf3dbee00f090f348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:48.296914 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 06:38:48.296959 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem (1082 bytes)
	I0915 06:38:48.296989 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:38:48.297016 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem (1675 bytes)
	I0915 06:38:48.297633 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:38:48.326882 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:38:48.352922 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:38:48.378019 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 06:38:48.403101 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:38:48.427999 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 06:38:48.452962 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:38:48.477908 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 06:38:48.503859 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:38:48.530602 2523870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:38:48.549981 2523870 ssh_runner.go:195] Run: openssl version
	I0915 06:38:48.555953 2523870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:38:48.566111 2523870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:38:48.569738 2523870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:38 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:38:48.569808 2523870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:38:48.577078 2523870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
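
The `b5213941.0` link name is not arbitrary: OpenSSL looks up CA certificates in /etc/ssl/certs by the subject-name hash that `openssl x509 -hash` prints, with a numeric suffix to disambiguate collisions, so the two steps above amount to a hand-rolled update-ca-certificates for a single CA. Sketch of the same pairing:

    # compute the subject hash, then create the lookup symlink OpenSSL expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
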
	I0915 06:38:48.587122 2523870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:38:48.590775 2523870 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:38:48.590821 2523870 kubeadm.go:392] StartCluster: {Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:38:48.590906 2523870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 06:38:48.590965 2523870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 06:38:48.629289 2523870 cri.go:89] found id: ""
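
The empty `found id: ""` is the expected answer on a first start: crictl is asked for the IDs of all containers (running or not) whose pod-namespace label is kube-system, and nothing has been created yet. The same label filter works interactively, e.g.:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
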
	I0915 06:38:48.629429 2523870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:38:48.638918 2523870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:38:48.648246 2523870 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 06:38:48.648316 2523870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:38:48.657387 2523870 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:38:48.657405 2523870 kubeadm.go:157] found existing configuration files:
	
	I0915 06:38:48.657462 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:38:48.666518 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:38:48.666640 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:38:48.675439 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:38:48.684448 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:38:48.684566 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:38:48.693351 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:38:48.702264 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:38:48.702338 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:38:48.711186 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:38:48.720567 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:38:48.720649 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 06:38:48.730182 2523870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
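
The long `--ignore-preflight-errors` list exists because the kubelet here runs inside a Docker container, where checks for free ports, swap, CPU/memory and kernel config would all fail spuriously. The preflight checks can also be replayed on their own as a sketch (`all` being the blunt equivalent of the explicit list above):

    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all
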
	I0915 06:38:48.780919 2523870 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:38:48.781052 2523870 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:38:48.802135 2523870 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0915 06:38:48.802289 2523870 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0915 06:38:48.802372 2523870 kubeadm.go:310] OS: Linux
	I0915 06:38:48.802466 2523870 kubeadm.go:310] CGROUPS_CPU: enabled
	I0915 06:38:48.802552 2523870 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0915 06:38:48.802630 2523870 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0915 06:38:48.802710 2523870 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0915 06:38:48.802818 2523870 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0915 06:38:48.802915 2523870 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0915 06:38:48.803014 2523870 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0915 06:38:48.803111 2523870 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0915 06:38:48.803189 2523870 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0915 06:38:48.874483 2523870 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:38:48.874665 2523870 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:38:48.874796 2523870 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:38:48.883798 2523870 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:38:48.887479 2523870 out.go:235]   - Generating certificates and keys ...
	I0915 06:38:48.887581 2523870 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:38:48.887682 2523870 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:38:49.339220 2523870 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:38:49.759961 2523870 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:38:49.944078 2523870 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:38:50.140723 2523870 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:38:50.666643 2523870 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:38:50.666794 2523870 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-078133 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:38:51.163173 2523870 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:38:51.163312 2523870 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-078133 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:38:52.181466 2523870 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:38:53.099402 2523870 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:38:53.475256 2523870 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:38:53.475495 2523870 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:38:53.868399 2523870 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:38:54.581730 2523870 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:38:55.110775 2523870 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:38:55.547546 2523870 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:38:55.827561 2523870 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:38:55.828306 2523870 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:38:55.831902 2523870 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:38:55.835154 2523870 out.go:235]   - Booting up control plane ...
	I0915 06:38:55.835337 2523870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:38:55.835455 2523870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:38:55.836739 2523870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:38:55.846862 2523870 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:38:55.852654 2523870 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:38:55.852715 2523870 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:38:55.945745 2523870 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:38:55.945867 2523870 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:38:56.449913 2523870 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.018783ms
	I0915 06:38:56.450000 2523870 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:39:03.453388 2523870 kubeadm.go:310] [api-check] The API server is healthy after 7.001427516s
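
Both health gates used here are plain HTTP(S) endpoints and can be probed by hand: the kubelet serves healthz unauthenticated on localhost:10248, and on a stock kubeadm cluster the API server's /healthz, /livez and /readyz are readable anonymously via the system:public-info-viewer ClusterRole. Sketch (run on the node):

    curl -s http://127.0.0.1:10248/healthz; echo       # kubelet
    curl -sk https://192.168.49.2:8443/livez; echo     # API server
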
	I0915 06:39:03.470476 2523870 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:39:03.486771 2523870 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:39:03.522770 2523870 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:39:03.522970 2523870 kubeadm.go:310] [mark-control-plane] Marking the node addons-078133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:39:03.536015 2523870 kubeadm.go:310] [bootstrap-token] Using token: 4rqqjy.4t6rodzggmhhv6z7
	I0915 06:39:03.540612 2523870 out.go:235]   - Configuring RBAC rules ...
	I0915 06:39:03.540745 2523870 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:39:03.546080 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:39:03.556664 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:39:03.561376 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:39:03.565561 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:39:03.569472 2523870 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:39:03.858387 2523870 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:39:04.293335 2523870 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:39:04.857982 2523870 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:39:04.859195 2523870 kubeadm.go:310] 
	I0915 06:39:04.859277 2523870 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:39:04.859289 2523870 kubeadm.go:310] 
	I0915 06:39:04.859390 2523870 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:39:04.859410 2523870 kubeadm.go:310] 
	I0915 06:39:04.859436 2523870 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:39:04.859496 2523870 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:39:04.859547 2523870 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:39:04.859551 2523870 kubeadm.go:310] 
	I0915 06:39:04.859605 2523870 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:39:04.859610 2523870 kubeadm.go:310] 
	I0915 06:39:04.859656 2523870 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:39:04.859661 2523870 kubeadm.go:310] 
	I0915 06:39:04.859713 2523870 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:39:04.859787 2523870 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:39:04.859854 2523870 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:39:04.859859 2523870 kubeadm.go:310] 
	I0915 06:39:04.859942 2523870 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:39:04.860018 2523870 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:39:04.860024 2523870 kubeadm.go:310] 
	I0915 06:39:04.860106 2523870 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4rqqjy.4t6rodzggmhhv6z7 \
	I0915 06:39:04.860208 2523870 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f02174f41dc6c5be174745b50e9cc9798f9f759608b7a0f4d9403600d367dc26 \
	I0915 06:39:04.860228 2523870 kubeadm.go:310] 	--control-plane 
	I0915 06:39:04.860233 2523870 kubeadm.go:310] 
	I0915 06:39:04.860316 2523870 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:39:04.860321 2523870 kubeadm.go:310] 
	I0915 06:39:04.860401 2523870 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4rqqjy.4t6rodzggmhhv6z7 \
	I0915 06:39:04.860502 2523870 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f02174f41dc6c5be174745b50e9cc9798f9f759608b7a0f4d9403600d367dc26 
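
The --discovery-token-ca-cert-hash pins the cluster CA during a join. If this printout is lost, the hash can be recomputed on the control plane from the CA's public key — the standard kubeadm recipe, pointed here at minikube's cert directory rather than /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
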
	I0915 06:39:04.863766 2523870 kubeadm.go:310] W0915 06:38:48.777179    1185 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:39:04.864101 2523870 kubeadm.go:310] W0915 06:38:48.777944    1185 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:39:04.864322 2523870 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0915 06:39:04.864429 2523870 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 06:39:04.864452 2523870 cni.go:84] Creating CNI manager for ""
	I0915 06:39:04.864461 2523870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:39:04.867489 2523870 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0915 06:39:04.870221 2523870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0915 06:39:04.874336 2523870 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0915 06:39:04.874362 2523870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0915 06:39:04.894284 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0915 06:39:05.208677 2523870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:39:05.208832 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:05.208913 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-078133 minikube.k8s.io/updated_at=2024_09_15T06_39_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-078133 minikube.k8s.io/primary=true
	I0915 06:39:05.363687 2523870 ops.go:34] apiserver oom_adj: -16
	I0915 06:39:05.363789 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:05.864408 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:06.363995 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:06.864868 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:07.364405 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:07.864339 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:08.364323 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:08.863944 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:09.038552 2523870 kubeadm.go:1113] duration metric: took 3.829784576s to wait for elevateKubeSystemPrivileges
	I0915 06:39:09.038581 2523870 kubeadm.go:394] duration metric: took 20.447764237s to StartCluster
	I0915 06:39:09.038600 2523870 settings.go:142] acquiring lock: {Name:mka250035ae8fe54edf72ffd2d620ea51b449138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:39:09.038726 2523870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 06:39:09.039111 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/kubeconfig: {Name:mkc3f194059147bb4fbadd341bbbabf67fee0985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:39:09.039939 2523870 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:39:09.040131 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:39:09.040325 2523870 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0915 06:39:09.040408 2523870 config.go:182] Loaded profile config "addons-078133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:39:09.040435 2523870 addons.go:69] Setting yakd=true in profile "addons-078133"
	I0915 06:39:09.040446 2523870 addons.go:69] Setting inspektor-gadget=true in profile "addons-078133"
	I0915 06:39:09.040451 2523870 addons.go:234] Setting addon yakd=true in "addons-078133"
	I0915 06:39:09.040456 2523870 addons.go:234] Setting addon inspektor-gadget=true in "addons-078133"
	I0915 06:39:09.040480 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.040485 2523870 addons.go:69] Setting cloud-spanner=true in profile "addons-078133"
	I0915 06:39:09.040495 2523870 addons.go:234] Setting addon cloud-spanner=true in "addons-078133"
	I0915 06:39:09.040508 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041050 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041482 2523870 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-078133"
	I0915 06:39:09.041560 2523870 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-078133"
	I0915 06:39:09.041613 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041647 2523870 addons.go:69] Setting metrics-server=true in profile "addons-078133"
	I0915 06:39:09.041912 2523870 addons.go:234] Setting addon metrics-server=true in "addons-078133"
	I0915 06:39:09.041934 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.042360 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.042974 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041662 2523870 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-078133"
	I0915 06:39:09.043422 2523870 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-078133"
	I0915 06:39:09.043458 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.044071 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.052905 2523870 out.go:177] * Verifying Kubernetes components...
	I0915 06:39:09.041670 2523870 addons.go:69] Setting registry=true in profile "addons-078133"
	I0915 06:39:09.053360 2523870 addons.go:234] Setting addon registry=true in "addons-078133"
	I0915 06:39:09.041677 2523870 addons.go:69] Setting storage-provisioner=true in profile "addons-078133"
	I0915 06:39:09.053594 2523870 addons.go:234] Setting addon storage-provisioner=true in "addons-078133"
	I0915 06:39:09.053698 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041685 2523870 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-078133"
	I0915 06:39:09.056926 2523870 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-078133"
	I0915 06:39:09.057295 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.062965 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041693 2523870 addons.go:69] Setting volcano=true in profile "addons-078133"
	I0915 06:39:09.065091 2523870 addons.go:234] Setting addon volcano=true in "addons-078133"
	I0915 06:39:09.065130 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.065593 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.063209 2523870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:39:09.040480 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041789 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041702 2523870 addons.go:69] Setting volumesnapshots=true in profile "addons-078133"
	I0915 06:39:09.085273 2523870 addons.go:234] Setting addon volumesnapshots=true in "addons-078133"
	I0915 06:39:09.085333 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.085846 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041796 2523870 addons.go:69] Setting gcp-auth=true in profile "addons-078133"
	I0915 06:39:09.086076 2523870 mustload.go:65] Loading cluster: addons-078133
	I0915 06:39:09.086239 2523870 config.go:182] Loaded profile config "addons-078133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:39:09.086465 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041801 2523870 addons.go:69] Setting default-storageclass=true in profile "addons-078133"
	I0915 06:39:09.094560 2523870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-078133"
	I0915 06:39:09.094904 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041806 2523870 addons.go:69] Setting ingress=true in profile "addons-078133"
	I0915 06:39:09.105001 2523870 addons.go:234] Setting addon ingress=true in "addons-078133"
	I0915 06:39:09.105055 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.105584 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041811 2523870 addons.go:69] Setting ingress-dns=true in profile "addons-078133"
	I0915 06:39:09.105828 2523870 addons.go:234] Setting addon ingress-dns=true in "addons-078133"
	I0915 06:39:09.105864 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.106291 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.063670 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.139706 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.157805 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.241029 2523870 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:39:09.244895 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:39:09.244991 2523870 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:39:09.245101 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
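
The Go template in these inspect calls digs the host port Docker mapped to the container's SSH port out of the inspect JSON: .NetworkSettings.Ports is a map keyed by "22/tcp" whose value is a list of bindings, hence the two nested index calls followed by .HostPort on the first binding. Run directly, it yields the port the ssh clients below connect to:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-078133
    # e.g. 35748
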
	I0915 06:39:09.252566 2523870 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:39:09.255882 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:39:09.255913 2523870 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:39:09.255985 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.305949 2523870 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:39:09.309848 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:39:09.310085 2523870 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:39:09.310113 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:39:09.310186 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.322978 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:39:09.329149 2523870 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-078133"
	I0915 06:39:09.329212 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.329744 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.346286 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:39:09.349169 2523870 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:39:09.349337 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:39:09.349376 2523870 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:39:09.349484 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.354629 2523870 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:39:09.354704 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:39:09.354789 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.367623 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:39:09.389092 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:39:09.389347 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:39:09.389610 2523870 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:39:09.389626 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:39:09.389688 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	W0915 06:39:09.391591 2523870 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0915 06:39:09.391963 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.396501 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:39:09.398337 2523870 addons.go:234] Setting addon default-storageclass=true in "addons-078133"
	I0915 06:39:09.398383 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.398799 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.406062 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:39:09.406277 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 06:39:09.406914 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:39:09.411306 2523870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:39:09.411331 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:39:09.411398 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.432227 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:39:09.434825 2523870 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:39:09.435043 2523870 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:39:09.435065 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:39:09.435134 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.437472 2523870 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:39:09.437496 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:39:09.437566 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.453082 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:39:09.457762 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:39:09.462413 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:39:09.468969 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:39:09.471555 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:39:09.471593 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:39:09.471669 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.482934 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
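
This pipeline patches CoreDNS in place: it dumps the coredns ConfigMap, uses sed to splice a hosts block (resolving host.minikube.internal to the gateway) ahead of the forward directive and a log directive ahead of errors, then feeds the result back through kubectl replace. A quick way to verify the splice landed (the comments show the expected Corefile fragment):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # should now contain:
    #     log
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf
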
	I0915 06:39:09.483223 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.484125 2523870 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:39:09.487259 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:39:09.487279 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:39:09.487344 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.520984 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.593269 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.596982 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.597062 2523870 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:39:09.599402 2523870 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:39:09.599428 2523870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:39:09.599501 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.602275 2523870 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:39:09.604798 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.607521 2523870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:39:09.607774 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:39:09.608168 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.621024 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.634782 2523870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:39:09.641915 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.644998 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.679310 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.699858 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.709617 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.725574 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.726343 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.967170 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:39:09.967196 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:39:10.051753 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:39:10.051784 2523870 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:39:10.123585 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:39:10.131017 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:39:10.155112 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:39:10.155140 2523870 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:39:10.162216 2523870 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:39:10.162242 2523870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:39:10.168215 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:39:10.200571 2523870 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:39:10.200648 2523870 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:39:10.204330 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:39:10.207613 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:39:10.207693 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:39:10.221132 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:39:10.221213 2523870 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:39:10.229090 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:39:10.232441 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:39:10.236135 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:39:10.253555 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:39:10.253632 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:39:10.314939 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:39:10.315016 2523870 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:39:10.319329 2523870 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:39:10.319406 2523870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:39:10.359489 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:39:10.359560 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:39:10.377308 2523870 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:39:10.377381 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:39:10.388486 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:39:10.388563 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:39:10.430613 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:39:10.430693 2523870 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:39:10.536291 2523870 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:39:10.536370 2523870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:39:10.546167 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:39:10.563456 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:39:10.563540 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:39:10.590878 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:39:10.595036 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:39:10.595130 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:39:10.651963 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:39:10.652038 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:39:10.780564 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:39:10.780649 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:39:10.783802 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:39:10.783880 2523870 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:39:10.787389 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:39:10.787467 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:39:10.855263 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:39:10.910709 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:39:10.910790 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:39:10.943539 2523870 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:39:10.943619 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:39:10.947004 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:39:10.947081 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:39:10.975982 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:39:10.976062 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:39:11.041384 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:39:11.041456 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:39:11.041859 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:39:11.041910 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:39:11.067123 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:39:11.169804 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:39:11.187844 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:39:11.187928 2523870 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:39:11.413987 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:39:11.414061 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:39:11.545139 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:39:11.545161 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:39:11.690868 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:39:11.690891 2523870 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:39:11.861968 2523870 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.378992448s)
	I0915 06:39:11.861995 2523870 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
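The two-second pipeline that just completed edits CoreDNS in place: it dumps the coredns ConfigMap, uses sed to splice a hosts block in front of the forward directive (and a log directive after errors), then pipes the result back through kubectl replace. Reading the sed expression alone, the fragment injected into the Corefile is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

so that host.minikube.internal resolves to the host address 192.168.49.1 from inside the cluster.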
	I0915 06:39:11.863108 2523870 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.22829938s)
	I0915 06:39:11.863907 2523870 node_ready.go:35] waiting up to 6m0s for node "addons-078133" to be "Ready" ...
	I0915 06:39:11.925007 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:39:12.734191 2523870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-078133" context rescaled to 1 replicas
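Rescaling coredns to one replica is minikube trimming the stock two-replica kubeadm Deployment down for a single-node cluster. A rough manual equivalent, as a sketch of the effect rather than the call kapi.go actually makes:

	kubectl --context addons-078133 -n kube-system scale deployment coredns --replicas=1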
	I0915 06:39:13.816313 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.692684755s)
	I0915 06:39:13.816426 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.685386035s)
	I0915 06:39:13.816486 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.648202296s)
	I0915 06:39:13.948928 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:14.413876 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.209453947s)
	I0915 06:39:15.491159 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.261979832s)
	I0915 06:39:15.491246 2523870 addons.go:475] Verifying addon ingress=true in "addons-078133"
	I0915 06:39:15.491560 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.259043386s)
	I0915 06:39:15.491668 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.255460851s)
	I0915 06:39:15.491897 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.945656931s)
	I0915 06:39:15.491911 2523870 addons.go:475] Verifying addon metrics-server=true in "addons-078133"
	I0915 06:39:15.491940 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.900983898s)
	I0915 06:39:15.491947 2523870 addons.go:475] Verifying addon registry=true in "addons-078133"
	I0915 06:39:15.492354 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.637011622s)
	I0915 06:39:15.492468 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.425238269s)
	I0915 06:39:15.492570 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.322686637s)
	W0915 06:39:15.492507 2523870 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:39:15.492702 2523870 retry.go:31] will retry after 365.365183ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
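The failure above is an ordering race, not a bad manifest: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass in the same kubectl apply batch that creates the snapshot.storage.k8s.io CRDs, and the new kinds are not yet established when the class is validated, hence "ensure CRDs are installed first". minikube's answer is the 365ms retry, which succeeds below with apply --force. Done by hand, the fix is to apply the CRDs first and wait for them to be established, e.g.:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml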
	I0915 06:39:15.494865 2523870 out.go:177] * Verifying registry addon...
	I0915 06:39:15.494883 2523870 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-078133 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:39:15.494996 2523870 out.go:177] * Verifying ingress addon...
	I0915 06:39:15.499126 2523870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:39:15.499146 2523870 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:39:15.508673 2523870 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:39:15.508703 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:15.509966 2523870 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:39:15.510037 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0915 06:39:15.524385 2523870 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
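That storageclass warning is the API server's optimistic-concurrency check firing: something else updated the local-path StorageClass between minikube's read and its write, so the PUT with a stale resourceVersion is rejected with "the object has been modified". The enable still proceeds. If the default class ever needed fixing by hand, the standard annotation looks like this (class names taken from this log; "standard" is assumed to be the intended default):

	kubectl annotate storageclass local-path storageclass.kubernetes.io/is-default-class=false --overwrite
	kubectl annotate storageclass standard storageclass.kubernetes.io/is-default-class=true --overwrite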
	I0915 06:39:15.858832 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:39:15.879445 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.954334967s)
	I0915 06:39:15.879493 2523870 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-078133"
	I0915 06:39:15.882304 2523870 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:39:15.886174 2523870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:39:15.939391 2523870 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:39:15.939465 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
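Each kapi.go:96 line from here on is one tick of a poll loop over a label selector; expect them to repeat until the node goes Ready and the addon images pull. A rough kubectl equivalent of the three watches (selectors, namespaces, and the 6m budget copied from the log):

	kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m
	kubectl -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m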
	I0915 06:39:16.048275 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:16.059314 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:16.367719 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:16.390881 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:16.513275 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:16.521440 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:16.891066 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:17.005641 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:17.007645 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:17.130505 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.27161243s)
	I0915 06:39:17.390841 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:17.503165 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:17.504695 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:17.890914 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:18.008065 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:18.009583 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:18.371574 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:18.390782 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:18.506247 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:18.506438 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:18.560915 2523870 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:39:18.560997 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:18.579856 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:18.744915 2523870 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:39:18.764474 2523870 addons.go:234] Setting addon gcp-auth=true in "addons-078133"
	I0915 06:39:18.764523 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:18.765025 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:18.782156 2523870 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:39:18.782213 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:18.801456 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:18.904312 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:18.904653 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:39:18.907445 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:39:18.910534 2523870 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:39:18.910565 2523870 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:39:18.936545 2523870 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:39:18.936579 2523870 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:39:18.963991 2523870 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:39:18.964067 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:39:19.000463 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:39:19.016170 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:19.018516 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:19.395257 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:19.504167 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:19.505568 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:19.690148 2523870 addons.go:475] Verifying addon gcp-auth=true in "addons-078133"
	I0915 06:39:19.694850 2523870 out.go:177] * Verifying gcp-auth addon...
	I0915 06:39:19.714020 2523870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:39:19.735242 2523870 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:39:19.735265 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
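gcp-auth gets the same treatment: its webhook manifests were applied at 06:39:19.000 above, and this poll now tracks the resulting pod in the gcp-auth namespace. A quick manual check would be something like:

	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth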
	I0915 06:39:19.889636 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:20.006962 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:20.015633 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:20.219761 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:20.390783 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:20.503049 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:20.503934 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:20.717230 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:20.867048 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:20.890525 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:21.008560 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:21.010633 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:21.218675 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:21.398063 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:21.503634 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:21.505331 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:21.718256 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:21.891285 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:22.004961 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:22.006610 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:22.219382 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:22.391119 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:22.505105 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:22.506699 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:22.718469 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:22.868045 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:22.891039 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:23.006023 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:23.007330 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:23.217716 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:23.392441 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:23.504360 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:23.505442 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:23.718077 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:23.890026 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:24.009952 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:24.011764 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:24.217196 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:24.390856 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:24.503823 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:24.504306 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:24.717265 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:24.890322 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:25.004815 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:25.009217 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:25.218931 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:25.368330 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:25.390248 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:25.504490 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:25.504784 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:25.718031 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:25.889897 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:26.006178 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:26.009321 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:26.217851 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:26.390260 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:26.503645 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:26.503929 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:26.717228 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:26.889966 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:27.005860 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:27.006534 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:27.217232 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:27.391379 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:27.503218 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:27.504180 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:27.717918 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:27.867581 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:27.890599 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:28.008041 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:28.010528 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:28.218488 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:28.390431 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:28.503223 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:28.503754 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:28.718274 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:28.890278 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:29.004652 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:29.006990 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:29.217428 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:29.390775 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:29.503442 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:29.504951 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:29.717347 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:29.867767 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:29.889736 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:30.013658 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:30.013836 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:30.219186 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:30.391799 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:30.503268 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:30.504148 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:30.717747 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:30.890714 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:31.004930 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:31.005992 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:31.217720 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:31.390558 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:31.503622 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:31.504583 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:31.718229 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:31.890555 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:32.008758 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:32.009715 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:32.217800 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:32.367710 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:32.389503 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:32.504290 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:32.504617 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:32.718358 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:32.890232 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:33.013792 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:33.014310 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:33.217772 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:33.389964 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:33.503854 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:33.504297 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:33.718265 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:33.890626 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:34.005812 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:34.007225 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:34.218580 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:34.368052 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:34.389929 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:34.502638 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:34.503613 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:34.718366 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:34.891557 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:35.009694 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:35.021653 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:35.218731 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:35.390461 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:35.504550 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:35.506436 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:35.718202 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:35.890352 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:36.006752 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:36.008736 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:36.217910 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:36.390208 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:36.503044 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:36.503488 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:36.717595 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:36.867872 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:36.890611 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:37.007512 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:37.008318 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:37.217196 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:37.389970 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:37.502759 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:37.503952 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:37.717068 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:37.890324 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:38.008794 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:38.009771 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:38.217829 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:38.389937 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:38.503592 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:38.504486 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:38.717991 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:38.890450 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:39.008193 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:39.009653 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:39.226065 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:39.367638 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:39.390621 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:39.507715 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:39.508472 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:39.718445 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:39.890449 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:40.011215 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:40.031551 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:40.218036 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:40.390520 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:40.506183 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:40.507671 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:40.718484 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:40.889891 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:41.006703 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:41.007677 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:41.217954 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:41.368038 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:41.390857 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:41.502948 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:41.503795 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:41.723269 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:41.890629 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:42.009905 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:42.010464 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:42.217795 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:42.390908 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:42.503860 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:42.504836 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:42.717714 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:42.890761 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:43.007858 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:43.008735 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:43.217902 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:43.389922 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:43.502784 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:43.503593 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:43.717585 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:43.868251 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:43.890507 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:44.014356 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:44.014574 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:44.218704 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:44.390683 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:44.503015 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:44.503922 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:44.717370 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:44.890339 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:45.006474 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:45.008151 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:45.218416 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:45.390283 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:45.503879 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:45.504683 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:45.717454 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:45.890475 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:46.008464 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:46.011999 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:46.217682 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:46.367996 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:46.390451 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:46.503110 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:46.504008 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:46.717277 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:46.890358 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:47.006411 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:47.007378 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:47.217355 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:47.390037 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:47.503022 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:47.503858 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:47.717276 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:47.890100 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:48.011525 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:48.014501 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:48.217881 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:48.390415 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:48.502868 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:48.503714 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:48.717603 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:48.868116 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:48.889580 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:49.007659 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:49.008613 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:49.221630 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:49.390355 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:49.503859 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:49.504764 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:49.717278 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:49.890162 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:50.016362 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:50.016914 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:50.218199 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:50.390287 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:50.503347 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:50.504044 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:50.717043 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:50.890485 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:51.049786 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:51.062794 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:51.224379 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:51.397876 2523870 node_ready.go:49] node "addons-078133" has status "Ready":"True"
	I0915 06:39:51.397903 2523870 node_ready.go:38] duration metric: took 39.533978864s for node "addons-078133" to be "Ready" ...
	I0915 06:39:51.397914 2523870 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
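
The two lines above mark the transition in this run: after roughly 39.5s of polling, node "addons-078133" flips to Ready and minikube moves on to waiting for the system-critical pods. For context on what a node_ready-style check amounts to, here is a minimal client-go sketch of reading a node's Ready condition; the kubeconfig path and the helper name nodeIsReady are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the NodeReady condition is True, i.e.
// whether the node would log has status "Ready":"True" above.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a kubeconfig pointing at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-078133", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("node %q Ready=%v\n", node.Name, nodeIsReady(node))
}
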
	I0915 06:39:51.427264 2523870 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:39:51.427292 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:51.464114 2523870 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7vkbz" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:51.590510 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:51.591035 2523870 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:39:51.591054 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:51.769687 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:51.901853 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:52.030916 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:52.032462 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:52.223429 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:52.391680 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:52.523484 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:52.524528 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:52.718617 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:52.891172 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:52.971134 2523870 pod_ready.go:93] pod "coredns-7c65d6cfc9-7vkbz" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.971160 2523870 pod_ready.go:82] duration metric: took 1.507009842s for pod "coredns-7c65d6cfc9-7vkbz" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.971209 2523870 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.977562 2523870 pod_ready.go:93] pod "etcd-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.977605 2523870 pod_ready.go:82] duration metric: took 6.380539ms for pod "etcd-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.977622 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.984413 2523870 pod_ready.go:93] pod "kube-apiserver-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.984443 2523870 pod_ready.go:82] duration metric: took 6.771659ms for pod "kube-apiserver-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.984456 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.990371 2523870 pod_ready.go:93] pod "kube-controller-manager-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.990397 2523870 pod_ready.go:82] duration metric: took 5.931499ms for pod "kube-controller-manager-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.990414 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fjj4k" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.996392 2523870 pod_ready.go:93] pod "kube-proxy-fjj4k" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.996424 2523870 pod_ready.go:82] duration metric: took 6.001429ms for pod "kube-proxy-fjj4k" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.996438 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:53.009143 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:53.010564 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:53.218339 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:53.368479 2523870 pod_ready.go:93] pod "kube-scheduler-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:53.368505 2523870 pod_ready.go:82] duration metric: took 372.058726ms for pod "kube-scheduler-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:53.368517 2523870 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace to be "Ready" ...
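
At this point the harness begins its 6m0s wait on metrics-server-84c5f94fbc-gfw99, the pod that keeps reporting Ready:"False" in the pod_ready.go:103 lines that follow (and whose failure surfaces later as TestAddons/parallel/MetricsServer). A hedged sketch of that kind of named-pod readiness poll, again in client-go; the function names and the 2s poll interval are assumptions for illustration, not minikube's code.

package waiters

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the PodReady condition is True.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls a named pod until it is Ready or the timeout
// elapses, roughly the behavior the pod_ready.go lines above reflect.
func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
}
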
	I0915 06:39:53.391482 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:53.508086 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:53.509396 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:53.719334 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:53.893534 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:54.008069 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:54.009214 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:54.220473 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:54.393145 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:54.506031 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:54.515648 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:54.718589 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:54.892614 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:55.007453 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:55.010827 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:55.222250 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:55.376527 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:39:55.392570 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:55.506637 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:55.508411 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:55.718235 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:55.891769 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:56.006852 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:56.009587 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:56.219174 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:56.390762 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:56.504692 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:56.506044 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:56.718089 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:56.901935 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:57.005894 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:57.007119 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:57.218515 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:57.392369 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:57.506920 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:57.508332 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:57.717995 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:57.875345 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:39:57.892007 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:58.006101 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:58.006268 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:58.226454 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:58.392438 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:58.506852 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:58.507582 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:58.718390 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:58.893006 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:59.004892 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:59.007281 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:59.218349 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:59.391747 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:59.507785 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:59.511002 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:59.718650 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:59.876003 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:39:59.892455 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:00.007347 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:00.009528 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:00.245436 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:00.508623 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:00.535863 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:00.537735 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:00.723119 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:00.901726 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:01.012175 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:01.013228 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:01.223627 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:01.397325 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:01.508050 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:01.509577 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:01.719168 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:01.876338 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:01.893359 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:02.016637 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:02.019038 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:02.219910 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:02.392659 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:02.529881 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:02.531435 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:02.719132 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:02.893546 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:03.012685 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:03.014579 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:03.224218 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:03.391738 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:03.508749 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:03.512180 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:03.719109 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:03.876617 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:03.893892 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:04.012887 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:04.014341 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:04.218097 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:04.392063 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:04.503904 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:04.504946 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:04.717690 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:04.891182 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:05.010877 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:05.011628 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:05.217387 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:05.399458 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:05.505163 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:05.506344 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:05.721686 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:05.876868 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:05.893999 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:06.009105 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:06.010539 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:06.218863 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:06.391805 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:06.504869 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:06.505897 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:06.717807 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:06.900869 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:07.011645 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:07.012942 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:07.217184 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:07.391107 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:07.504957 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:07.505322 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:07.717633 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:07.899952 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:08.011925 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:08.013069 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:08.217268 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:08.376650 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:08.397803 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:08.505492 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:08.506686 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:08.718464 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:08.891562 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:09.005433 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:09.007473 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:09.218676 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:09.393023 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:09.504274 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:09.504893 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:09.720362 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:09.900991 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:10.009437 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:10.010607 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:10.217916 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:10.391420 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:10.503362 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:10.504726 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:10.718554 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:10.875439 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:10.891030 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:11.006830 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:11.007545 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:11.218297 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:11.394784 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:11.505674 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:11.507120 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:11.717797 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:11.892090 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:12.012833 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:12.014665 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:12.218750 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:12.391423 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:12.504227 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:12.505056 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:12.717972 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:12.891091 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:13.004369 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:13.006898 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:13.217462 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:13.375022 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:13.391234 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:13.505887 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:13.509132 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:13.719365 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:13.892337 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:14.027805 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:14.029543 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:14.218097 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:14.394284 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:14.503684 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:14.504768 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:14.720283 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:14.891679 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:15.005388 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:15.108689 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:15.218457 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:15.375762 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:15.392211 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:15.504886 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:15.505624 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:15.717476 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:15.891681 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:16.009431 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:16.012968 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:16.218788 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:16.391091 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:16.505725 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:16.508000 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:16.719209 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:16.893291 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:17.011839 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:17.012867 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:17.219510 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:17.376009 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:17.392084 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:17.506117 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:17.509472 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:17.718736 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:17.892359 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:18.011278 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:18.011976 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:18.218284 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:18.391739 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:18.504420 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:18.505593 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:18.718246 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:18.891814 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:19.009582 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:19.010144 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:19.217852 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:19.391270 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:19.505094 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:19.505450 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:19.717938 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:19.876031 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:19.892583 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:20.022672 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:20.023496 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:20.219111 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:20.391707 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:20.504488 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:20.505535 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:20.735971 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:20.894400 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:21.005148 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:21.006658 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:21.218083 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:21.392231 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:21.505987 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:21.507535 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:21.719497 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:21.876166 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:21.895827 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:22.005926 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:22.015854 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:22.218563 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:22.392508 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:22.505920 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:22.507345 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:22.721627 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:22.891650 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:23.007542 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:23.011624 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:23.218496 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:23.424380 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:23.517867 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:23.519670 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:23.717708 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:23.877493 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:23.892213 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:24.009293 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:24.010054 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:24.218495 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:24.391439 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:24.505968 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:24.507321 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:24.718282 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:24.892049 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:25.021077 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:25.027241 2523870 kapi.go:107] duration metric: took 1m9.528110217s to wait for kubernetes.io/minikube-addons=registry ...
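
The kapi.go:107 line above closes out the registry wait: the label selector kubernetes.io/minikube-addons=registry matched 2 pods at 06:39:51 and all of them settled after 1m9.5s of the sub-second polling recorded in the kapi.go:96 lines. As a sketch of what a label-selector wait like this involves (all names and the 500ms interval are illustrative assumptions, not minikube's implementation):

package waiters

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForSelector polls until at least one pod matches the label
// selector and every matching pod is in phase Running, mirroring the
// kind of check behind the kapi.go:96 lines above.
func waitForSelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pods for %q in %q not Running after %s", selector, ns, timeout)
		}
		time.Sleep(500 * time.Millisecond) // the log suggests sub-second polling; the interval is an assumption
	}
}
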
	I0915 06:40:25.217764 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:25.390797 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:25.503618 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:25.717901 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:25.893381 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:26.009074 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:26.217567 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:26.374885 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:26.391801 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:26.503999 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:26.722475 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:26.890983 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:27.006887 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:27.219513 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:27.392340 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:27.504077 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:27.718269 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:27.892904 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:28.004023 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:28.219042 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:28.376299 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:28.399220 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:28.504498 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:28.718964 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:28.896135 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:29.006026 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:29.218032 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:29.393178 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:29.509539 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:29.718139 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:29.893776 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:30.005062 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:30.234708 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:30.393094 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:30.505057 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:30.718540 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:30.876680 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:30.893933 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:31.008054 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:31.219075 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:31.404942 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:31.505691 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:31.718932 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:31.893105 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:32.009801 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:32.219037 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:32.393111 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:32.504180 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:32.719026 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:32.876996 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:32.892930 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:33.005692 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:33.217717 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:33.391361 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:33.504310 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:33.718712 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:33.891841 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:34.005309 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:34.219141 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:34.423022 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:34.503613 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:34.726243 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:34.896767 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:35.004767 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:35.218452 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:35.378703 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:35.398054 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:35.504269 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:35.719379 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:35.896417 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:36.020512 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:36.218661 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:36.393103 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:36.505162 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:36.718101 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:36.895403 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:37.007273 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:37.218042 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:37.392145 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:37.503483 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:37.718902 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:37.875591 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:37.891548 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:38.005969 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:38.217510 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:38.391997 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:38.503726 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:38.718614 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:38.891369 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:39.005328 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:39.217328 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:39.391927 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:39.504617 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:39.718749 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:39.876161 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:39.891185 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:40.004226 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:40.218071 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:40.392301 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:40.505556 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:40.717967 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:40.892236 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:41.005881 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:41.218764 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:41.395672 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:41.503746 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:41.719115 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:41.876921 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:41.895525 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:42.011166 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:42.218028 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:42.392438 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:42.503989 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:42.718426 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:42.891965 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:43.005470 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:43.218325 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:43.391674 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:43.503672 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:43.718546 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:43.891279 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:44.009592 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:44.218862 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:44.377134 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:44.391140 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:44.504636 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:44.718865 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:44.892732 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:45.005120 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:45.220362 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:45.393290 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:45.504799 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:45.719264 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:45.892303 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:46.010041 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:46.222170 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:46.392718 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:46.507034 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:46.719634 2523870 kapi.go:107] duration metric: took 1m27.005612282s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:40:46.721255 2523870 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-078133 cluster.
	I0915 06:40:46.722663 2523870 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:40:46.723801 2523870 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
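	# --- Editor's annotation (not part of the log) ---
	# The three out.go messages above are minikube's gcp-auth completion notes.
	# As a hedged illustration of the opt-out they describe, a pod can carry the
	# `gcp-auth-skip-secret` label at creation time; the pod name "skip-demo"
	# below is hypothetical:
	#   kubectl --context addons-078133 run skip-demo --image=busybox \
	#     --labels="gcp-auth-skip-secret=true" --restart=Never -- sleep 3600
	# --------------------------------------------------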
	I0915 06:40:46.876708 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:46.894513 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:47.005594 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:47.392485 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:47.504081 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:47.897917 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:48.005531 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:48.391420 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:48.503783 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:48.878884 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:48.893603 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:49.007483 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:49.391911 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:49.505584 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:49.891537 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:50.012368 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:50.392057 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:50.503606 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:50.891754 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:51.004331 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:51.379225 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:51.391873 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:51.504975 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:51.892942 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:52.069383 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:52.397630 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:52.504476 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:52.891313 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:53.011566 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:53.392684 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:53.504669 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:53.875903 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:53.891954 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:54.006138 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:54.392101 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:54.503774 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:54.899918 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:55.006756 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:55.392260 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:55.504130 2523870 kapi.go:107] duration metric: took 1m40.004978236s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:40:55.892947 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:56.382504 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:56.392491 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:56.924548 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:57.393779 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:57.891466 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:58.392642 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:58.877042 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:58.891963 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:59.391610 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:59.893537 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:00.397105 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:00.904885 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:01.375303 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:01.391382 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:01.892308 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:02.392116 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:02.894530 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:03.375597 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:03.392955 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:03.891747 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:04.399605 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:04.891765 2523870 kapi.go:107] duration metric: took 1m49.0055889s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:41:04.894260 2523870 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0915 06:41:04.895478 2523870 addons.go:510] duration metric: took 1m55.855150005s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
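	# --- Editor's annotation (not part of the log) ---
	# A minimal sketch for re-checking the addon set reported above after the
	# fact (assumes the minikube binary on PATH and the addons-078133 profile):
	#   minikube -p addons-078133 addons list
	# Each addon named in the enable summary should be reported as enabled.
	# --------------------------------------------------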
	I0915 06:41:05.875469 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:08.377139 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:10.875168 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:11.380090 2523870 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"True"
	I0915 06:41:11.380127 2523870 pod_ready.go:82] duration metric: took 1m18.011601636s for pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace to be "Ready" ...
	I0915 06:41:11.380141 2523870 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-cwx62" in "kube-system" namespace to be "Ready" ...
	I0915 06:41:11.415635 2523870 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-cwx62" in "kube-system" namespace has status "Ready":"True"
	I0915 06:41:11.415662 2523870 pod_ready.go:82] duration metric: took 35.513361ms for pod "nvidia-device-plugin-daemonset-cwx62" in "kube-system" namespace to be "Ready" ...
	I0915 06:41:11.415685 2523870 pod_ready.go:39] duration metric: took 1m20.01772025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
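	# --- Editor's annotation (not part of the log) ---
	# The readiness polling above can be reproduced with kubectl's built-in
	# wait; a hedged equivalent using the metrics-server pod name from this log:
	#   kubectl --context addons-078133 -n kube-system wait \
	#     --for=condition=Ready pod/metrics-server-84c5f94fbc-gfw99 --timeout=6m0s
	# --------------------------------------------------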
	I0915 06:41:11.415708 2523870 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:41:11.415741 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:41:11.415815 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:41:11.495394 2523870 cri.go:89] found id: "e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:11.495424 2523870 cri.go:89] found id: ""
	I0915 06:41:11.495434 2523870 logs.go:276] 1 containers: [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6]
	I0915 06:41:11.495517 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.499500 2523870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:41:11.499585 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:41:11.550559 2523870 cri.go:89] found id: "aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:11.550594 2523870 cri.go:89] found id: ""
	I0915 06:41:11.550603 2523870 logs.go:276] 1 containers: [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6]
	I0915 06:41:11.550667 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.554309 2523870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:41:11.554399 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:41:11.601798 2523870 cri.go:89] found id: "85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:11.601821 2523870 cri.go:89] found id: ""
	I0915 06:41:11.601829 2523870 logs.go:276] 1 containers: [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c]
	I0915 06:41:11.601888 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.605508 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:41:11.605625 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:41:11.647917 2523870 cri.go:89] found id: "9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:11.647991 2523870 cri.go:89] found id: ""
	I0915 06:41:11.648013 2523870 logs.go:276] 1 containers: [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159]
	I0915 06:41:11.648110 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.651911 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:41:11.652032 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:41:11.698154 2523870 cri.go:89] found id: "7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:11.698186 2523870 cri.go:89] found id: ""
	I0915 06:41:11.698195 2523870 logs.go:276] 1 containers: [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee]
	I0915 06:41:11.698256 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.701917 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:41:11.701995 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:41:11.746530 2523870 cri.go:89] found id: "fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:11.746597 2523870 cri.go:89] found id: ""
	I0915 06:41:11.746615 2523870 logs.go:276] 1 containers: [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1]
	I0915 06:41:11.746685 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.750359 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:41:11.750457 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:41:11.793770 2523870 cri.go:89] found id: "0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:11.793794 2523870 cri.go:89] found id: ""
	I0915 06:41:11.793802 2523870 logs.go:276] 1 containers: [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725]
	I0915 06:41:11.793884 2523870 ssh_runner.go:195] Run: which crictl
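	# --- Editor's annotation (not part of the log) ---
	# The discovery loop above (crictl ps per component, then `which crictl`)
	# collapses to a two-liner when run by hand on the node; both commands are
	# taken verbatim from this log:
	#   ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	#   sudo crictl logs --tail 400 "$ID"
	# --------------------------------------------------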
	I0915 06:41:11.797463 2523870 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:41:11.797492 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:41:11.992092 2523870 logs.go:123] Gathering logs for etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] ...
	I0915 06:41:11.992123 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:12.054295 2523870 logs.go:123] Gathering logs for kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] ...
	I0915 06:41:12.054337 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:12.107869 2523870 logs.go:123] Gathering logs for kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] ...
	I0915 06:41:12.107906 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:12.152727 2523870 logs.go:123] Gathering logs for container status ...
	I0915 06:41:12.152760 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:41:12.209277 2523870 logs.go:123] Gathering logs for kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] ...
	I0915 06:41:12.209313 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:12.282525 2523870 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:41:12.282570 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:41:12.379304 2523870 logs.go:123] Gathering logs for kubelet ...
	I0915 06:41:12.379387 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0915 06:41:12.452980 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028288    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.453256 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.453428 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.453641 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.453826 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.454053 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:12.488341 2523870 logs.go:123] Gathering logs for dmesg ...
	I0915 06:41:12.488390 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:41:12.506041 2523870 logs.go:123] Gathering logs for kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] ...
	I0915 06:41:12.506071 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:12.563059 2523870 logs.go:123] Gathering logs for coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] ...
	I0915 06:41:12.563096 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:12.606199 2523870 logs.go:123] Gathering logs for kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] ...
	I0915 06:41:12.606234 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:12.648655 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:12.648683 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0915 06:41:12.648741 2523870 out.go:270] X Problems detected in kubelet:
	W0915 06:41:12.648758 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.648765 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.648780 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.648787 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.648799 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:12.648833 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:12.648843 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
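	# --- Editor's annotation (not part of the log) ---
	# The "X Problems detected in kubelet" block repeats the reflector RBAC
	# warnings found while scanning the kubelet journal. To pull just those
	# lines from the node yourself (hedged; journalctl flags as used elsewhere
	# in this log):
	#   sudo journalctl -u kubelet -n 400 --no-pager | grep -E 'reflector|UnhandledError'
	# --------------------------------------------------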
	I0915 06:41:22.649917 2523870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:41:22.664122 2523870 api_server.go:72] duration metric: took 2m13.624140746s to wait for apiserver process to appear ...
	I0915 06:41:22.664149 2523870 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:41:22.664188 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:41:22.664251 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:41:22.715271 2523870 cri.go:89] found id: "e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:22.715298 2523870 cri.go:89] found id: ""
	I0915 06:41:22.715308 2523870 logs.go:276] 1 containers: [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6]
	I0915 06:41:22.715367 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.718981 2523870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:41:22.719054 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:41:22.758523 2523870 cri.go:89] found id: "aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:22.758548 2523870 cri.go:89] found id: ""
	I0915 06:41:22.758558 2523870 logs.go:276] 1 containers: [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6]
	I0915 06:41:22.758622 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.762372 2523870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:41:22.762450 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:41:22.803919 2523870 cri.go:89] found id: "85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:22.803939 2523870 cri.go:89] found id: ""
	I0915 06:41:22.803946 2523870 logs.go:276] 1 containers: [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c]
	I0915 06:41:22.804003 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.807829 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:41:22.807902 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:41:22.846386 2523870 cri.go:89] found id: "9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:22.846461 2523870 cri.go:89] found id: ""
	I0915 06:41:22.846477 2523870 logs.go:276] 1 containers: [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159]
	I0915 06:41:22.846550 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.850418 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:41:22.850502 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:41:22.894080 2523870 cri.go:89] found id: "7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:22.894105 2523870 cri.go:89] found id: ""
	I0915 06:41:22.894113 2523870 logs.go:276] 1 containers: [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee]
	I0915 06:41:22.894173 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.898275 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:41:22.898353 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:41:22.938696 2523870 cri.go:89] found id: "fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:22.938717 2523870 cri.go:89] found id: ""
	I0915 06:41:22.938725 2523870 logs.go:276] 1 containers: [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1]
	I0915 06:41:22.938785 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.942715 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:41:22.942798 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:41:22.990421 2523870 cri.go:89] found id: "0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:22.990492 2523870 cri.go:89] found id: ""
	I0915 06:41:22.990514 2523870 logs.go:276] 1 containers: [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725]
	I0915 06:41:22.990602 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.994406 2523870 logs.go:123] Gathering logs for kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] ...
	I0915 06:41:22.994433 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:23.073513 2523870 logs.go:123] Gathering logs for etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] ...
	I0915 06:41:23.073551 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:23.141989 2523870 logs.go:123] Gathering logs for kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] ...
	I0915 06:41:23.142067 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:23.197032 2523870 logs.go:123] Gathering logs for kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] ...
	I0915 06:41:23.197109 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:23.242720 2523870 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:41:23.242756 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:41:23.337137 2523870 logs.go:123] Gathering logs for container status ...
	I0915 06:41:23.337178 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:41:23.394824 2523870 logs.go:123] Gathering logs for kubelet ...
	I0915 06:41:23.394853 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0915 06:41:23.446249 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028288    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.446518 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.446688 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.446894 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.447080 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.447305 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:23.482115 2523870 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:41:23.482149 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:41:23.634605 2523870 logs.go:123] Gathering logs for coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] ...
	I0915 06:41:23.634636 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:23.675844 2523870 logs.go:123] Gathering logs for kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] ...
	I0915 06:41:23.675873 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:23.723363 2523870 logs.go:123] Gathering logs for kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] ...
	I0915 06:41:23.723398 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:23.797568 2523870 logs.go:123] Gathering logs for dmesg ...
	I0915 06:41:23.797657 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:41:23.816018 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:23.816047 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0915 06:41:23.816107 2523870 out.go:270] X Problems detected in kubelet:
	W0915 06:41:23.816120 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.816132 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.816144 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.816154 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.816160 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:23.816172 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:23.816178 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:41:33.817587 2523870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 06:41:33.825225 2523870 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 06:41:33.826245 2523870 api_server.go:141] control plane version: v1.31.1
	I0915 06:41:33.826278 2523870 api_server.go:131] duration metric: took 11.162120505s to wait for apiserver health ...
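	# --- Editor's annotation (not part of the log) ---
	# The healthz probe above can be checked by hand; a hedged equivalent
	# against the endpoint shown in this log (-k skips TLS verification,
	# -s silences progress output):
	#   curl -ks https://192.168.49.2:8443/healthz
	# Expected response body on success: ok
	# --------------------------------------------------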
	I0915 06:41:33.826288 2523870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:41:33.826312 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:41:33.826381 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:41:33.865811 2523870 cri.go:89] found id: "e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:33.865838 2523870 cri.go:89] found id: ""
	I0915 06:41:33.865847 2523870 logs.go:276] 1 containers: [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6]
	I0915 06:41:33.865905 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.869614 2523870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:41:33.869702 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:41:33.907874 2523870 cri.go:89] found id: "aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:33.907899 2523870 cri.go:89] found id: ""
	I0915 06:41:33.907907 2523870 logs.go:276] 1 containers: [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6]
	I0915 06:41:33.907963 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.911687 2523870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:41:33.911762 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:41:33.951105 2523870 cri.go:89] found id: "85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:33.951128 2523870 cri.go:89] found id: ""
	I0915 06:41:33.951137 2523870 logs.go:276] 1 containers: [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c]
	I0915 06:41:33.951196 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.954918 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:41:33.955022 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:41:33.994550 2523870 cri.go:89] found id: "9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:33.994574 2523870 cri.go:89] found id: ""
	I0915 06:41:33.994583 2523870 logs.go:276] 1 containers: [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159]
	I0915 06:41:33.994643 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.998722 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:41:33.998797 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:41:34.039134 2523870 cri.go:89] found id: "7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:34.039159 2523870 cri.go:89] found id: ""
	I0915 06:41:34.039167 2523870 logs.go:276] 1 containers: [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee]
	I0915 06:41:34.039230 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:34.043267 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:41:34.043394 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:41:34.084090 2523870 cri.go:89] found id: "fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:34.084114 2523870 cri.go:89] found id: ""
	I0915 06:41:34.084123 2523870 logs.go:276] 1 containers: [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1]
	I0915 06:41:34.084176 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:34.087813 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:41:34.087891 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:41:34.132606 2523870 cri.go:89] found id: "0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:34.132631 2523870 cri.go:89] found id: ""
	I0915 06:41:34.132639 2523870 logs.go:276] 1 containers: [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725]
	I0915 06:41:34.132712 2523870 ssh_runner.go:195] Run: which crictl
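	Reproduction sketch for the container-ID discovery above (assumes crictl on PATH and the same CRI-O runtime; not part of the test run itself):
	
	  # One newest container ID per component, mirroring the cri.go queries in the log.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)
	    echo "$name: ${id:-<none>}"
	  done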
	I0915 06:41:34.136498 2523870 logs.go:123] Gathering logs for kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] ...
	I0915 06:41:34.136526 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:34.183368 2523870 logs.go:123] Gathering logs for kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] ...
	I0915 06:41:34.183400 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:34.226908 2523870 logs.go:123] Gathering logs for kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] ...
	I0915 06:41:34.226942 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:34.320748 2523870 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:41:34.320790 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:41:34.423086 2523870 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:41:34.423130 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:41:34.576900 2523870 logs.go:123] Gathering logs for kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] ...
	I0915 06:41:34.576934 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:34.653698 2523870 logs.go:123] Gathering logs for etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] ...
	I0915 06:41:34.653736 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:34.704486 2523870 logs.go:123] Gathering logs for coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] ...
	I0915 06:41:34.704520 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:34.751429 2523870 logs.go:123] Gathering logs for kubelet ...
	I0915 06:41:34.751460 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0915 06:41:34.804369 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028288    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.804610 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.804777 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.804990 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.805174 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.805399 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:34.842270 2523870 logs.go:123] Gathering logs for dmesg ...
	I0915 06:41:34.842324 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:41:34.861474 2523870 logs.go:123] Gathering logs for kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] ...
	I0915 06:41:34.861505 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:34.906963 2523870 logs.go:123] Gathering logs for container status ...
	I0915 06:41:34.906995 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:41:34.978748 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:34.978778 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0915 06:41:34.978858 2523870 out.go:270] X Problems detected in kubelet:
	W0915 06:41:34.978873 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.978881 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.978887 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.978894 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.979024 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:34.979041 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:34.979048 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
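	The "Problems detected in kubelet" block repeats the reflector warnings surfaced by the journal scan above; they are typically transient RBAC denials, raised while the node authorizer has not yet linked the node to the pods referencing those configmaps and secrets. The same lines can be pulled directly on the node with the journalctl invocation the log already uses, filtered to the reflector call sites:
	
	  sudo journalctl -u kubelet -n 400 | grep -E 'reflector.go:(561|158)'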
	I0915 06:41:44.992518 2523870 system_pods.go:59] 18 kube-system pods found
	I0915 06:41:44.992563 2523870 system_pods.go:61] "coredns-7c65d6cfc9-7vkbz" [6ea47236-17f3-4492-8780-9ad56187f489] Running
	I0915 06:41:44.992570 2523870 system_pods.go:61] "csi-hostpath-attacher-0" [fbcdc315-eaad-4112-a529-eec22f5f7dce] Running
	I0915 06:41:44.992575 2523870 system_pods.go:61] "csi-hostpath-resizer-0" [f5efb463-f551-4dde-87d2-5ec91a566e81] Running
	I0915 06:41:44.992579 2523870 system_pods.go:61] "csi-hostpathplugin-cgcjb" [58bfa35e-116a-45b1-a414-47dadde393c6] Running
	I0915 06:41:44.992583 2523870 system_pods.go:61] "etcd-addons-078133" [b238897b-6598-4d41-915c-57e032f1b6ad] Running
	I0915 06:41:44.992589 2523870 system_pods.go:61] "kindnet-h6zsk" [9c090aa0-3e32-475a-9090-5423f0449354] Running
	I0915 06:41:44.992593 2523870 system_pods.go:61] "kube-apiserver-addons-078133" [9606256f-7a4c-47eb-91e3-29271e631613] Running
	I0915 06:41:44.992597 2523870 system_pods.go:61] "kube-controller-manager-addons-078133" [fa465a0e-97b0-4d5f-af33-a26dbf7e3985] Running
	I0915 06:41:44.992602 2523870 system_pods.go:61] "kube-ingress-dns-minikube" [d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6] Running
	I0915 06:41:44.992637 2523870 system_pods.go:61] "kube-proxy-fjj4k" [be724ff8-b220-4bfb-961c-c6cf462d9ddc] Running
	I0915 06:41:44.992646 2523870 system_pods.go:61] "kube-scheduler-addons-078133" [8a13493f-2796-4a2e-b83b-2f5f8f4f09bb] Running
	I0915 06:41:44.992651 2523870 system_pods.go:61] "metrics-server-84c5f94fbc-gfw99" [8d80d558-0f92-43df-9e1e-035dad596039] Running
	I0915 06:41:44.992655 2523870 system_pods.go:61] "nvidia-device-plugin-daemonset-cwx62" [6bc66e81-1049-45ef-b236-d0ad12ba82cf] Running
	I0915 06:41:44.992658 2523870 system_pods.go:61] "registry-66c9cd494c-dvjjx" [f6332eec-8451-4a18-b1e4-899a9c98a398] Running
	I0915 06:41:44.992662 2523870 system_pods.go:61] "registry-proxy-pph5w" [5bfdb7e0-869e-409d-b185-7e7c0d0386d6] Running
	I0915 06:41:44.992666 2523870 system_pods.go:61] "snapshot-controller-56fcc65765-6lsdb" [40abaaf0-851b-4368-bb6c-c43e5fd96b18] Running
	I0915 06:41:44.992669 2523870 system_pods.go:61] "snapshot-controller-56fcc65765-9dh55" [aac62e95-b572-45ce-ba9b-5b4451c8578b] Running
	I0915 06:41:44.992673 2523870 system_pods.go:61] "storage-provisioner" [30881b3f-dd6b-47c6-8171-db912be01758] Running
	I0915 06:41:44.992680 2523870 system_pods.go:74] duration metric: took 11.166385954s to wait for pod list to return data ...
	I0915 06:41:44.992692 2523870 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:41:44.995239 2523870 default_sa.go:45] found service account: "default"
	I0915 06:41:44.995269 2523870 default_sa.go:55] duration metric: took 2.570121ms for default service account to be created ...
	I0915 06:41:44.995278 2523870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:41:45.005688 2523870 system_pods.go:86] 18 kube-system pods found
	I0915 06:41:45.005731 2523870 system_pods.go:89] "coredns-7c65d6cfc9-7vkbz" [6ea47236-17f3-4492-8780-9ad56187f489] Running
	I0915 06:41:45.005739 2523870 system_pods.go:89] "csi-hostpath-attacher-0" [fbcdc315-eaad-4112-a529-eec22f5f7dce] Running
	I0915 06:41:45.005745 2523870 system_pods.go:89] "csi-hostpath-resizer-0" [f5efb463-f551-4dde-87d2-5ec91a566e81] Running
	I0915 06:41:45.005749 2523870 system_pods.go:89] "csi-hostpathplugin-cgcjb" [58bfa35e-116a-45b1-a414-47dadde393c6] Running
	I0915 06:41:45.005753 2523870 system_pods.go:89] "etcd-addons-078133" [b238897b-6598-4d41-915c-57e032f1b6ad] Running
	I0915 06:41:45.005758 2523870 system_pods.go:89] "kindnet-h6zsk" [9c090aa0-3e32-475a-9090-5423f0449354] Running
	I0915 06:41:45.005762 2523870 system_pods.go:89] "kube-apiserver-addons-078133" [9606256f-7a4c-47eb-91e3-29271e631613] Running
	I0915 06:41:45.005766 2523870 system_pods.go:89] "kube-controller-manager-addons-078133" [fa465a0e-97b0-4d5f-af33-a26dbf7e3985] Running
	I0915 06:41:45.005771 2523870 system_pods.go:89] "kube-ingress-dns-minikube" [d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6] Running
	I0915 06:41:45.005776 2523870 system_pods.go:89] "kube-proxy-fjj4k" [be724ff8-b220-4bfb-961c-c6cf462d9ddc] Running
	I0915 06:41:45.005780 2523870 system_pods.go:89] "kube-scheduler-addons-078133" [8a13493f-2796-4a2e-b83b-2f5f8f4f09bb] Running
	I0915 06:41:45.005785 2523870 system_pods.go:89] "metrics-server-84c5f94fbc-gfw99" [8d80d558-0f92-43df-9e1e-035dad596039] Running
	I0915 06:41:45.005792 2523870 system_pods.go:89] "nvidia-device-plugin-daemonset-cwx62" [6bc66e81-1049-45ef-b236-d0ad12ba82cf] Running
	I0915 06:41:45.005797 2523870 system_pods.go:89] "registry-66c9cd494c-dvjjx" [f6332eec-8451-4a18-b1e4-899a9c98a398] Running
	I0915 06:41:45.005801 2523870 system_pods.go:89] "registry-proxy-pph5w" [5bfdb7e0-869e-409d-b185-7e7c0d0386d6] Running
	I0915 06:41:45.005805 2523870 system_pods.go:89] "snapshot-controller-56fcc65765-6lsdb" [40abaaf0-851b-4368-bb6c-c43e5fd96b18] Running
	I0915 06:41:45.005811 2523870 system_pods.go:89] "snapshot-controller-56fcc65765-9dh55" [aac62e95-b572-45ce-ba9b-5b4451c8578b] Running
	I0915 06:41:45.005815 2523870 system_pods.go:89] "storage-provisioner" [30881b3f-dd6b-47c6-8171-db912be01758] Running
	I0915 06:41:45.005824 2523870 system_pods.go:126] duration metric: took 10.539108ms to wait for k8s-apps to be running ...
	I0915 06:41:45.005833 2523870 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:41:45.005903 2523870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:41:45.040231 2523870 system_svc.go:56] duration metric: took 34.383305ms WaitForService to wait for kubelet
	I0915 06:41:45.041762 2523870 kubeadm.go:582] duration metric: took 2m36.001781462s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:41:45.041984 2523870 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:41:45.049036 2523870 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 06:41:45.055344 2523870 node_conditions.go:123] node cpu capacity is 2
	I0915 06:41:45.061556 2523870 node_conditions.go:105] duration metric: took 17.573916ms to run NodePressure ...
	I0915 06:41:45.061585 2523870 start.go:241] waiting for startup goroutines ...
	I0915 06:41:45.061593 2523870 start.go:246] waiting for cluster config update ...
	I0915 06:41:45.061614 2523870 start.go:255] writing updated cluster config ...
	I0915 06:41:45.061999 2523870 ssh_runner.go:195] Run: rm -f paused
	I0915 06:41:45.465387 2523870 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:41:45.468637 2523870 out.go:177] * Done! kubectl is now configured to use "addons-078133" cluster and "default" namespace by default
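	The final skew check compares the host kubectl client (1.31.0) against the cluster's server version (1.31.1); a minor skew of 0 is well within kubectl's supported window. The equivalent manual check:
	
	  kubectl --context addons-078133 version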
	
	
	==> CRI-O <==
	Sep 15 06:54:23 addons-078133 crio[962]: time="2024-09-15 06:54:23.158927689Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 15 06:54:23 addons-078133 crio[962]: time="2024-09-15 06:54:23.181141744Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fb4f000e574ce891e70e60fa5175213d56fcd501f371b1e4434c56566b0e0398/merged/etc/passwd: no such file or directory"
	Sep 15 06:54:23 addons-078133 crio[962]: time="2024-09-15 06:54:23.181331820Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fb4f000e574ce891e70e60fa5175213d56fcd501f371b1e4434c56566b0e0398/merged/etc/group: no such file or directory"
	Sep 15 06:54:23 addons-078133 crio[962]: time="2024-09-15 06:54:23.220787170Z" level=info msg="Created container 970298acdf1fcea46dad132c7fa00cb82b96a354bc728f7c81028d601e810110: default/hello-world-app-55bf9c44b4-prp58/hello-world-app" id=4322b5e6-12f1-46b3-8969-db7308000a83 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 15 06:54:23 addons-078133 crio[962]: time="2024-09-15 06:54:23.221756869Z" level=info msg="Starting container: 970298acdf1fcea46dad132c7fa00cb82b96a354bc728f7c81028d601e810110" id=4a4360a5-435b-449c-8655-5660fdd5ab92 name=/runtime.v1.RuntimeService/StartContainer
	Sep 15 06:54:23 addons-078133 crio[962]: time="2024-09-15 06:54:23.230807233Z" level=info msg="Started container" PID=8760 containerID=970298acdf1fcea46dad132c7fa00cb82b96a354bc728f7c81028d601e810110 description=default/hello-world-app-55bf9c44b4-prp58/hello-world-app id=4a4360a5-435b-449c-8655-5660fdd5ab92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1a7955266a785ff41924757529fa9783c7f5f82a879daaad53b8a47cb53d4a46
	Sep 15 06:54:23 addons-078133 crio[962]: time="2024-09-15 06:54:23.714435641Z" level=info msg="Removing container: 2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc" id=d06e5eac-8d3a-4777-8449-40ecfcd6b2e6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 15 06:54:23 addons-078133 crio[962]: time="2024-09-15 06:54:23.736148359Z" level=info msg="Removed container 2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=d06e5eac-8d3a-4777-8449-40ecfcd6b2e6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 15 06:54:25 addons-078133 crio[962]: time="2024-09-15 06:54:25.457173862Z" level=info msg="Stopping container: 9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2 (timeout: 2s)" id=3ac251b4-ced7-44e2-88b0-8a70a93789fb name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.242651217Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0ddff7c7-667f-439c-bf56-6dbf50085677 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.242944084Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0ddff7c7-667f-439c-bf56-6dbf50085677 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.463157817Z" level=warning msg="Stopping container 9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=3ac251b4-ced7-44e2-88b0-8a70a93789fb name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:54:27 addons-078133 conmon[4703]: conmon 9ddfb8c4ba14f6132e36 <ninfo>: container 4714 exited with status 137
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.608689909Z" level=info msg="Stopped container 9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2: ingress-nginx/ingress-nginx-controller-bc57996ff-xtz9n/controller" id=3ac251b4-ced7-44e2-88b0-8a70a93789fb name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.609437533Z" level=info msg="Stopping pod sandbox: 5078eb39f626b5e99452f17d0aa08dcf722d222080c59a4aaed91a018fa31420" id=f81e2dc5-e413-45d9-8a91-00f196f1c385 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.612941743Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-UIW77T77KQNES5CL - [0:0]\n:KUBE-HP-ZMCNS3RKMDLOPICN - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-ZMCNS3RKMDLOPICN\n-X KUBE-HP-UIW77T77KQNES5CL\nCOMMIT\n"
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.614357413Z" level=info msg="Closing host port tcp:80"
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.614417400Z" level=info msg="Closing host port tcp:443"
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.615775299Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.615809719Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.615977230Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-xtz9n Namespace:ingress-nginx ID:5078eb39f626b5e99452f17d0aa08dcf722d222080c59a4aaed91a018fa31420 UID:80a49e6a-775f-4a72-ae75-261096c46397 NetNS:/var/run/netns/dbfc4f49-6778-4c15-a85f-0a2108007291 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.616120587Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-xtz9n from CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.651073885Z" level=info msg="Stopped pod sandbox: 5078eb39f626b5e99452f17d0aa08dcf722d222080c59a4aaed91a018fa31420" id=f81e2dc5-e413-45d9-8a91-00f196f1c385 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.727276573Z" level=info msg="Removing container: 9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2" id=62314d9c-126a-4504-a5a5-c2d8cf40b1cf name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 15 06:54:27 addons-078133 crio[962]: time="2024-09-15 06:54:27.748337516Z" level=info msg="Removed container 9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2: ingress-nginx/ingress-nginx-controller-bc57996ff-xtz9n/controller" id=62314d9c-126a-4504-a5a5-c2d8cf40b1cf name=/runtime.v1.RuntimeService/RemoveContainer
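	Note the shutdown sequence above: the ingress-nginx controller does not exit within its 2-second stop timeout, so conmon reports exit status 137 (SIGKILL) before the sandbox and host ports 80/443 are torn down. The same journal slice can be regenerated on the node with the command minikube itself runs:
	
	  sudo journalctl -u crio -n 400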
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	970298acdf1fc       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   1a7955266a785       hello-world-app-55bf9c44b4-prp58
	406c2b057a5bb       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                              2 minutes ago       Running             nginx                     0                   5e8cffae4ca3c       nginx
	0827a067b0cde       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 13 minutes ago      Running             gcp-auth                  0                   0dde73874d0cd       gcp-auth-89d5ffd79-dfdjh
	5564eb7326685       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   14 minutes ago      Exited              patch                     0                   aa76c10a86aa6       ingress-nginx-admission-patch-sqnfz
	aa66b6bbbe960       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   14 minutes ago      Exited              create                    0                   189ef42c5e81a       ingress-nginx-admission-create-b57t6
	c1c95dfa2a499       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        14 minutes ago      Running             metrics-server            0                   6b2883d632ffa       metrics-server-84c5f94fbc-gfw99
	d271b7f778ca6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             14 minutes ago      Running             storage-provisioner       0                   e16867b58e664       storage-provisioner
	85daa7360e5e9       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             14 minutes ago      Running             coredns                   0                   9ab5526bc1400       coredns-7c65d6cfc9-7vkbz
	0dd8f2e1d527f       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             15 minutes ago      Running             kindnet-cni               0                   4ab45f1d528e9       kindnet-h6zsk
	7effe62b4c9a3       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             15 minutes ago      Running             kube-proxy                0                   519d37d41f025       kube-proxy-fjj4k
	e96ddc5409269       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             15 minutes ago      Running             kube-apiserver            0                   1b90d84bbc3b0       kube-apiserver-addons-078133
	9b04df1237c35       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             15 minutes ago      Running             kube-scheduler            0                   5bcd311de4186       kube-scheduler-addons-078133
	fc20989b36b93       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             15 minutes ago      Running             kube-controller-manager   0                   37863f70ae7a4       kube-controller-manager-addons-078133
	aa1f1d2a843d0       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             15 minutes ago      Running             etcd                      0                   037f467425e39       etcd-addons-078133
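	This table is the output of the container-status probe quoted earlier in the log. On the node it can be regenerated with the same fallback chain (docker is only consulted if crictl is missing):
	
	  sudo crictl ps -a || sudo docker ps -a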
	
	
	==> coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] <==
	[INFO] 10.244.0.7:60956 - 40381 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116937s
	[INFO] 10.244.0.7:45161 - 29366 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002240627s
	[INFO] 10.244.0.7:45161 - 32945 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003202302s
	[INFO] 10.244.0.7:37659 - 38912 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000204787s
	[INFO] 10.244.0.7:37659 - 18694 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141732s
	[INFO] 10.244.0.7:46398 - 25256 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000213993s
	[INFO] 10.244.0.7:46398 - 24995 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00027443s
	[INFO] 10.244.0.7:47479 - 52991 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072909s
	[INFO] 10.244.0.7:47479 - 46333 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005142s
	[INFO] 10.244.0.7:49213 - 1338 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005339s
	[INFO] 10.244.0.7:49213 - 49467 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072876s
	[INFO] 10.244.0.7:42802 - 41891 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00141695s
	[INFO] 10.244.0.7:42802 - 39841 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001484666s
	[INFO] 10.244.0.7:38900 - 44116 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066592s
	[INFO] 10.244.0.7:38900 - 30299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116486s
	[INFO] 10.244.0.19:47931 - 25633 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002470447s
	[INFO] 10.244.0.19:33148 - 45348 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002478143s
	[INFO] 10.244.0.19:56417 - 22070 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147508s
	[INFO] 10.244.0.19:50454 - 60030 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133371s
	[INFO] 10.244.0.19:42936 - 16948 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128678s
	[INFO] 10.244.0.19:52660 - 34977 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125519s
	[INFO] 10.244.0.19:59020 - 55342 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003112933s
	[INFO] 10.244.0.19:49810 - 53119 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003366441s
	[INFO] 10.244.0.19:56751 - 42495 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005208407s
	[INFO] 10.244.0.19:42362 - 42298 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.005481853s
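	The NXDOMAIN bursts above are the expected ndots search-path expansion: each lookup of registry.kube-system.svc.cluster.local is first tried against every resolv.conf search domain (the cluster.local suffixes, then us-east-2.compute.internal) before the fully qualified name answers NOERROR, so they do not by themselves indicate a DNS fault. A hedged in-cluster probe, reusing the busybox image the test already pulls (pod name is illustrative):
	
	  kubectl --context addons-078133 run --rm -it dns-probe \
	    --image=gcr.io/k8s-minikube/busybox --restart=Never -- \
	    nslookup registry.kube-system.svc.cluster.local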
	
	
	==> describe nodes <==
	Name:               addons-078133
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-078133
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-078133
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_39_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-078133
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:39:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-078133
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:54:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:52:11 +0000   Sun, 15 Sep 2024 06:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:52:11 +0000   Sun, 15 Sep 2024 06:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:52:11 +0000   Sun, 15 Sep 2024 06:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:52:11 +0000   Sun, 15 Sep 2024 06:39:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-078133
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd8b84dea15e4d35b14dc406bd3d7d26
	  System UUID:                a2ace0dd-aa7e-4476-816d-37514df39de9
	  Boot ID:                    86c781ec-01d2-4efb-aba1-a43f302ac663
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-prp58         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-89d5ffd79-dfdjh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-7vkbz                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-addons-078133                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-h6zsk                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-addons-078133             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-078133    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fjj4k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-078133             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-gfw99          100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-078133 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-078133 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-078133 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node addons-078133 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node addons-078133 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m                kubelet          Node addons-078133 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node addons-078133 event: Registered Node addons-078133 in Controller
	  Normal   NodeReady                14m                kubelet          Node addons-078133 status is now: NodeReady
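	The description above comes from the kubectl invocation quoted in the minikube log, run against the in-VM kubeconfig. From the host the equivalent is:
	
	  kubectl --context addons-078133 describe node addons-078133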
	
	
	==> dmesg <==
	[Sep15 05:34] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000091 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001089] FS-Cache: O-cookie d=000000009ec4a1b9{9P.session} n=00000000933e989b
	[  +0.001105] FS-Cache: O-key=[10] '34333036383438313233'
	[  +0.000796] FS-Cache: N-cookie c=00000092 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000965] FS-Cache: N-cookie d=000000009ec4a1b9{9P.session} n=00000000c50af53f
	[  +0.001363] FS-Cache: N-key=[10] '34333036383438313233'
	[Sep15 06:08] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
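	These kernel messages predate the failure window (FS-Cache cookie noise from the 9P session plus one overlayfs warning) and were captured with the filter minikube runs:
	
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400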
	
	
	==> etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] <==
	{"level":"info","ts":"2024-09-15T06:38:58.060337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-15T06:38:58.060369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T06:38:58.065025Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-078133 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:38:58.065273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:38:58.065678Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.068367Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:38:58.068608Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:38:58.068687Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:38:58.069414Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:38:58.070446Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-15T06:38:58.073106Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.073273Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.088962Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.089741Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:38:58.090677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:39:10.078651Z","caller":"traceutil/trace.go:171","msg":"trace[978204264] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"138.849688ms","start":"2024-09-15T06:39:09.939783Z","end":"2024-09-15T06:39:10.078632Z","steps":["trace[978204264] 'process raft request'  (duration: 95.382705ms)","trace[978204264] 'compare'  (duration: 42.981654ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:39:13.438537Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.182536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:39:13.438634Z","caller":"traceutil/trace.go:171","msg":"trace[1902515032] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:440; }","duration":"112.30017ms","start":"2024-09-15T06:39:13.326320Z","end":"2024-09-15T06:39:13.438620Z","steps":["trace[1902515032] 'agreement among raft nodes before linearized reading'  (duration: 83.629989ms)","trace[1902515032] 'range keys from in-memory index tree'  (duration: 28.533716ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:39:51.757080Z","caller":"traceutil/trace.go:171","msg":"trace[1907155975] transaction","detail":"{read_only:false; response_revision:896; number_of_response:1; }","duration":"103.53271ms","start":"2024-09-15T06:39:51.653528Z","end":"2024-09-15T06:39:51.757061Z","steps":["trace[1907155975] 'process raft request'  (duration: 79.5189ms)","trace[1907155975] 'compare'  (duration: 23.406243ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:48:58.204333Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1530}
	{"level":"info","ts":"2024-09-15T06:48:58.238285Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1530,"took":"33.495045ms","hash":3104697584,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3293184,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-15T06:48:58.238443Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3104697584,"revision":1530,"compact-revision":-1}
	{"level":"info","ts":"2024-09-15T06:53:58.209201Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1948}
	{"level":"info","ts":"2024-09-15T06:53:58.225434Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1948,"took":"15.604764ms","hash":4226942108,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4395008,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-15T06:53:58.225557Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4226942108,"revision":1948,"compact-revision":1530}
	
	
	==> gcp-auth [0827a067b0cde94dfdfe774133d38b55169c16cd00de8fa5c926fac9c7c30441] <==
	2024/09/15 06:41:45 Ready to write response ...
	2024/09/15 06:41:46 Ready to marshal response ...
	2024/09/15 06:41:46 Ready to write response ...
	2024/09/15 06:49:53 Ready to marshal response ...
	2024/09/15 06:49:53 Ready to write response ...
	2024/09/15 06:50:00 Ready to marshal response ...
	2024/09/15 06:50:00 Ready to write response ...
	2024/09/15 06:50:20 Ready to marshal response ...
	2024/09/15 06:50:20 Ready to write response ...
	2024/09/15 06:50:54 Ready to marshal response ...
	2024/09/15 06:50:54 Ready to write response ...
	2024/09/15 06:50:55 Ready to marshal response ...
	2024/09/15 06:50:55 Ready to write response ...
	2024/09/15 06:51:03 Ready to marshal response ...
	2024/09/15 06:51:03 Ready to write response ...
	2024/09/15 06:51:11 Ready to marshal response ...
	2024/09/15 06:51:11 Ready to write response ...
	2024/09/15 06:51:11 Ready to marshal response ...
	2024/09/15 06:51:11 Ready to write response ...
	2024/09/15 06:51:11 Ready to marshal response ...
	2024/09/15 06:51:11 Ready to write response ...
	2024/09/15 06:52:00 Ready to marshal response ...
	2024/09/15 06:52:00 Ready to write response ...
	2024/09/15 06:54:21 Ready to marshal response ...
	2024/09/15 06:54:21 Ready to write response ...
	
	
	==> kernel <==
	 06:54:33 up 14:37,  0 users,  load average: 0.77, 0.53, 1.20
	Linux addons-078133 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] <==
	I0915 06:52:30.843028       1 main.go:299] handling current node
	I0915 06:52:40.836565       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:52:40.836600       1 main.go:299] handling current node
	I0915 06:52:50.837444       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:52:50.837481       1 main.go:299] handling current node
	I0915 06:53:00.838028       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:53:00.838183       1 main.go:299] handling current node
	I0915 06:53:10.837066       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:53:10.837099       1 main.go:299] handling current node
	I0915 06:53:20.843272       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:53:20.843308       1 main.go:299] handling current node
	I0915 06:53:30.838300       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:53:30.838486       1 main.go:299] handling current node
	I0915 06:53:40.836618       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:53:40.836733       1 main.go:299] handling current node
	I0915 06:53:50.837115       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:53:50.837150       1 main.go:299] handling current node
	I0915 06:54:00.845087       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:54:00.845137       1 main.go:299] handling current node
	I0915 06:54:10.836981       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:54:10.837018       1 main.go:299] handling current node
	I0915 06:54:20.837061       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:54:20.837098       1 main.go:299] handling current node
	I0915 06:54:30.837040       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:54:30.837073       1 main.go:299] handling current node
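	kindnet logs one pass per 10-second reconcile loop and, on this single-node cluster, only ever handles itself. The stream can be followed live using the pod name from this report:
	
	  kubectl --context addons-078133 -n kube-system logs -f kindnet-h6zsk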
	
	
	==> kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] <==
	E0915 06:50:29.246052       1 watch.go:250] "Unhandled Error" err="write tcp 192.168.49.2:8443->10.244.0.13:46336: write: connection reset by peer" logger="UnhandledError"
	I0915 06:50:35.680485       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.680547       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.774314       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.774371       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.811502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.811566       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.819471       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.820168       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.950749       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.950798       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:50:36.819999       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:50:36.951215       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0915 06:50:36.956283       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0915 06:51:05.884183       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:51:05.894579       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:51:05.905669       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0915 06:51:11.606105       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.7.251"}
	E0915 06:51:20.905697       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0915 06:51:54.780427       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 06:51:55.812908       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0915 06:52:00.747011       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0915 06:52:01.077622       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.12.1"}
	I0915 06:54:22.043767       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.230.72"}
	E0915 06:54:24.513992       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
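	The trailing authentication failures line up with addon teardown: the bearer tokens are rejected because their backing service accounts (local-path-provisioner-service-account, ingress-nginx) were deleted while clients still held tokens. A quick confirmation that the accounts are gone:
	
	  kubectl --context addons-078133 get serviceaccounts -A | grep -E 'local-path|ingress-nginx'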
	
	
	==> kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] <==
	W0915 06:53:03.960327       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:53:03.960462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:53:12.996726       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:53:12.996769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:53:31.740039       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:53:31.740085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:53:38.617990       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:53:38.618034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:53:39.167265       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:53:39.167310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:54:01.644819       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:54:01.644874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:54:04.483680       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:54:04.483726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:54:13.197217       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:54:13.197263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:54:21.826082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="53.96826ms"
	I0915 06:54:21.837074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.705487ms"
	I0915 06:54:21.837244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.517µs"
	I0915 06:54:21.837321       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="19.003µs"
	I0915 06:54:23.783321       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="25.520855ms"
	I0915 06:54:23.783789       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="67.116µs"
	I0915 06:54:24.422676       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0915 06:54:24.426274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="7.532µs"
	I0915 06:54:24.431767       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	
	
	==> kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] <==
	I0915 06:39:13.431040       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:39:14.654548       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:39:14.654733       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:39:14.806709       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:39:14.806853       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:39:14.809136       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:39:14.809744       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:39:14.809813       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:39:14.834509       1 config.go:199] "Starting service config controller"
	I0915 06:39:14.847771       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:39:14.854180       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:39:14.881895       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:39:14.861657       1 config.go:328] "Starting node config controller"
	I0915 06:39:14.882892       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:39:14.982166       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:39:14.985602       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:39:14.987423       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] <==
	W0915 06:39:02.337994       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0915 06:39:02.338097       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:39:02.340793       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 06:39:02.338171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:39:02.340988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 06:39:02.341068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:39:02.341150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 06:39:02.341224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:39:02.341315       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:39:02.341387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:39:02.341464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:39:02.341546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:39:02.341632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 06:39:02.341721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0915 06:39:02.339535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0915 06:39:03.627072       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:54:23 addons-078133 kubelet[1502]: I0915 06:54:23.175000    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9j8l\" (UniqueName: \"kubernetes.io/projected/d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6-kube-api-access-p9j8l\") pod \"d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6\" (UID: \"d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6\") "
	Sep 15 06:54:23 addons-078133 kubelet[1502]: I0915 06:54:23.182525    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6-kube-api-access-p9j8l" (OuterVolumeSpecName: "kube-api-access-p9j8l") pod "d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6" (UID: "d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6"). InnerVolumeSpecName "kube-api-access-p9j8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:54:23 addons-078133 kubelet[1502]: I0915 06:54:23.275642    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-p9j8l\" (UniqueName: \"kubernetes.io/projected/d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6-kube-api-access-p9j8l\") on node \"addons-078133\" DevicePath \"\""
	Sep 15 06:54:23 addons-078133 kubelet[1502]: I0915 06:54:23.712799    1502 scope.go:117] "RemoveContainer" containerID="2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc"
	Sep 15 06:54:23 addons-078133 kubelet[1502]: I0915 06:54:23.736466    1502 scope.go:117] "RemoveContainer" containerID="2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc"
	Sep 15 06:54:23 addons-078133 kubelet[1502]: E0915 06:54:23.737052    1502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc\": container with ID starting with 2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc not found: ID does not exist" containerID="2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc"
	Sep 15 06:54:23 addons-078133 kubelet[1502]: I0915 06:54:23.737095    1502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc"} err="failed to get container status \"2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc\": rpc error: code = NotFound desc = could not find container \"2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc\": container with ID starting with 2246ddeb20532fa91ba252c33d8915b029b8dfa084dbe2efd7706fc069eed4fc not found: ID does not exist"
	Sep 15 06:54:24 addons-078133 kubelet[1502]: I0915 06:54:24.243579    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6" path="/var/lib/kubelet/pods/d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6/volumes"
	Sep 15 06:54:24 addons-078133 kubelet[1502]: I0915 06:54:24.444093    1502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-prp58" podStartSLOduration=2.474014983 podStartE2EDuration="3.44407603s" podCreationTimestamp="2024-09-15 06:54:21 +0000 UTC" firstStartedPulling="2024-09-15 06:54:22.186789638 +0000 UTC m=+918.127612969" lastFinishedPulling="2024-09-15 06:54:23.156850685 +0000 UTC m=+919.097674016" observedRunningTime="2024-09-15 06:54:23.756228944 +0000 UTC m=+919.697052275" watchObservedRunningTime="2024-09-15 06:54:24.44407603 +0000 UTC m=+920.384899361"
	Sep 15 06:54:24 addons-078133 kubelet[1502]: E0915 06:54:24.605947    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383264605665002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:54:24 addons-078133 kubelet[1502]: E0915 06:54:24.605998    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383264605665002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:54:26 addons-078133 kubelet[1502]: I0915 06:54:26.243565    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0fec42-72e2-4b44-95a0-31d53928eec4" path="/var/lib/kubelet/pods/0c0fec42-72e2-4b44-95a0-31d53928eec4/volumes"
	Sep 15 06:54:26 addons-078133 kubelet[1502]: I0915 06:54:26.244012    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62119138-2121-4f82-a168-1195e4dc025d" path="/var/lib/kubelet/pods/62119138-2121-4f82-a168-1195e4dc025d/volumes"
	Sep 15 06:54:27 addons-078133 kubelet[1502]: E0915 06:54:27.243197    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="118abc58-e4e4-4fbe-a031-20b040e86f27"
	Sep 15 06:54:27 addons-078133 kubelet[1502]: I0915 06:54:27.725939    1502 scope.go:117] "RemoveContainer" containerID="9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2"
	Sep 15 06:54:27 addons-078133 kubelet[1502]: I0915 06:54:27.748617    1502 scope.go:117] "RemoveContainer" containerID="9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2"
	Sep 15 06:54:27 addons-078133 kubelet[1502]: E0915 06:54:27.749103    1502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2\": container with ID starting with 9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2 not found: ID does not exist" containerID="9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2"
	Sep 15 06:54:27 addons-078133 kubelet[1502]: I0915 06:54:27.749145    1502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2"} err="failed to get container status \"9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2\": rpc error: code = NotFound desc = could not find container \"9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2\": container with ID starting with 9ddfb8c4ba14f6132e36022828cf44ff21ca0ed6f8a833fd73008bd878025ba2 not found: ID does not exist"
	Sep 15 06:54:27 addons-078133 kubelet[1502]: I0915 06:54:27.810815    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80a49e6a-775f-4a72-ae75-261096c46397-webhook-cert\") pod \"80a49e6a-775f-4a72-ae75-261096c46397\" (UID: \"80a49e6a-775f-4a72-ae75-261096c46397\") "
	Sep 15 06:54:27 addons-078133 kubelet[1502]: I0915 06:54:27.810873    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfxs8\" (UniqueName: \"kubernetes.io/projected/80a49e6a-775f-4a72-ae75-261096c46397-kube-api-access-cfxs8\") pod \"80a49e6a-775f-4a72-ae75-261096c46397\" (UID: \"80a49e6a-775f-4a72-ae75-261096c46397\") "
	Sep 15 06:54:27 addons-078133 kubelet[1502]: I0915 06:54:27.816994    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a49e6a-775f-4a72-ae75-261096c46397-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "80a49e6a-775f-4a72-ae75-261096c46397" (UID: "80a49e6a-775f-4a72-ae75-261096c46397"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 15 06:54:27 addons-078133 kubelet[1502]: I0915 06:54:27.817541    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a49e6a-775f-4a72-ae75-261096c46397-kube-api-access-cfxs8" (OuterVolumeSpecName: "kube-api-access-cfxs8") pod "80a49e6a-775f-4a72-ae75-261096c46397" (UID: "80a49e6a-775f-4a72-ae75-261096c46397"). InnerVolumeSpecName "kube-api-access-cfxs8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:54:27 addons-078133 kubelet[1502]: I0915 06:54:27.912168    1502 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80a49e6a-775f-4a72-ae75-261096c46397-webhook-cert\") on node \"addons-078133\" DevicePath \"\""
	Sep 15 06:54:27 addons-078133 kubelet[1502]: I0915 06:54:27.912209    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cfxs8\" (UniqueName: \"kubernetes.io/projected/80a49e6a-775f-4a72-ae75-261096c46397-kube-api-access-cfxs8\") on node \"addons-078133\" DevicePath \"\""
	Sep 15 06:54:28 addons-078133 kubelet[1502]: I0915 06:54:28.242975    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a49e6a-775f-4a72-ae75-261096c46397" path="/var/lib/kubelet/pods/80a49e6a-775f-4a72-ae75-261096c46397/volumes"
	
	
	==> storage-provisioner [d271b7f778ca6a5e43c6790e874afaf722384211e819eedb0f87091dcf8bb3ca] <==
	I0915 06:39:51.876457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:39:52.092367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:39:52.122251       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:39:52.141776       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:39:52.142096       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-078133_b714d925-ab44-41be-bcf1-c4695a08fcc2!
	I0915 06:39:52.143415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c1414a91-3bba-456a-9087-6984d4f1a1e5", APIVersion:"v1", ResourceVersion:"932", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-078133_b714d925-ab44-41be-bcf1-c4695a08fcc2 became leader
	I0915 06:39:52.243076       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-078133_b714d925-ab44-41be-bcf1-c4695a08fcc2!
	
-- /stdout --
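The kube-proxy log above flags "nodePortAddresses is unset; NodePort connections will be accepted on all local IPs". This does not affect the test outcome, but a minimal remediation sketch, assuming the standard kubeadm-style kube-proxy ConfigMap and DaemonSet that minikube creates, would be:

	# Sketch only, not part of the test run. Inspect the kube-proxy config,
	# set nodePortAddresses (e.g. to "primary", as the warning itself suggests),
	# then restart the DaemonSet so kube-proxy re-reads it:
	kubectl --context addons-078133 -n kube-system get configmap kube-proxy -o yaml
	kubectl --context addons-078133 -n kube-system rollout restart daemonset kube-proxy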
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-078133 -n addons-078133
helpers_test.go:261: (dbg) Run:  kubectl --context addons-078133 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-078133 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-078133 describe pod busybox:
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-078133/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:41:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x9nfs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x9nfs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  12m                   default-scheduler  Successfully assigned default/busybox to addons-078133
	  Normal   Pulling    11m (x4 over 12m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 12m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 12m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 12m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m44s (x42 over 12m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.94s)
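The describe output above shows why the post-mortem lists busybox as non-running: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password", and the pod carries the fake gcp-auth credentials (PROJECT_ID=this_is_fake). A hedged diagnostic sketch, not part of the test run, to separate a registry/auth problem from a kubelet problem:

	# Reproduce the failing pull directly on the node via the CRI:
	out/minikube-linux-arm64 -p addons-078133 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# Inspect the credentials the gcp-auth addon injected into the pod:
	kubectl --context addons-078133 get pod busybox -o jsonpath='{.spec.containers[0].env}'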

TestAddons/parallel/MetricsServer (357.59s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.691242ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-gfw99" [8d80d558-0f92-43df-9e1e-035dad596039] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004335593s
addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (100.00402ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 12m25.546873406s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (101.26927ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 12m29.51249524s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (90.357371ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 12m35.982837237s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (89.888115ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 12m44.605959498s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (97.570209ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 12m55.776370477s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (91.542425ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 13m11.487746968s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (91.033285ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 13m34.752994585s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (94.132976ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 13m59.628607104s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (97.732363ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 15m6.005535014s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (83.647594ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 15m41.463499398s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (89.917504ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 17m0.367837108s
** /stderr **

addons_test.go:417: (dbg) Run:  kubectl --context addons-078133 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-078133 top pods -n kube-system: exit status 1 (91.654044ms)
** stderr **
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7vkbz, age: 18m12.530003134s
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
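Every "kubectl top pods" retry above failed with "Metrics not available" for roughly six minutes, which points at the metrics pipeline rather than a slow first scrape. A diagnostic sketch (standard metrics-server checks, not part of the test run):

	# Is the metrics APIService registered and Available?
	kubectl --context addons-078133 get apiservice v1beta1.metrics.k8s.io
	# What does metrics-server itself report? (label taken from the test's pod selector)
	kubectl --context addons-078133 -n kube-system logs -l k8s-app=metrics-server --tail=50
	# Node-level metrics fail the same way if the scrape itself is broken:
	kubectl --context addons-078133 top nodes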
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-078133
helpers_test.go:235: (dbg) docker inspect addons-078133:
-- stdout --
	[
	    {
	        "Id": "7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde",
	        "Created": "2024-09-15T06:38:37.750228282Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2524440,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:38:37.907510174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1b71fa87733590eb4674b16f6945626ae533f3af37066893e3fd70eb9476268",
	        "ResolvConfPath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/hostname",
	        "HostsPath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/hosts",
	        "LogPath": "/var/lib/docker/containers/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde/7434fa99399a28396035634456c789f18e60db4571749c583420a20b0f890bde-json.log",
	        "Name": "/addons-078133",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-078133:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-078133",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420-init/diff:/var/lib/docker/overlay2/72792481ba3fe11d67c9c5bebed6121eb09dffa903ddf816dfb06e703f2d9d5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2972f1579d051820707a303e3a093e25713a29540c7aa76655f15ed7472a420/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-078133",
	                "Source": "/var/lib/docker/volumes/addons-078133/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-078133",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-078133",
	                "name.minikube.sigs.k8s.io": "addons-078133",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c8d7e1050dbe4977f54b06c2224002186fb12e89f8d90b585337ed8c180c6bd",
	            "SandboxKey": "/var/run/docker/netns/0c8d7e1050db",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35748"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35749"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35752"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35750"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35751"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-078133": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "61892ade19da7989ac86d074df0c7f6076bb69e05029d3382c7c93eab898c4ab",
	                    "EndpointID": "5578870202f5d628a4be39c5ca56e5901d1922ca753b45b5f33733d1f214df65",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-078133",
	                        "7434fa99399a"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
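The inspect output above confirms the resources requested at start time: "Memory": 4194304000 (~4000 MiB, matching --memory=4000) and "NanoCpus": 2000000000 (2 CPUs), with the API server published on 127.0.0.1:35751. A sketch for pulling just those fields, assuming only the docker CLI's built-in Go templates:

	# Resource limits applied to the kic container:
	docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' addons-078133
	# Host port mappings (22/ssh, 8443/apiserver, etc.):
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-078133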
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-078133 -n addons-078133
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-078133 logs -n 25: (2.277858172s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-600407                                                                     | download-only-600407   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| start   | --download-only -p                                                                          | download-docker-842211 | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | download-docker-842211                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-842211                                                                   | download-docker-842211 | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-404653   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | binary-mirror-404653                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33149                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-404653                                                                     | binary-mirror-404653   | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| addons  | enable dashboard -p                                                                         | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | addons-078133                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | addons-078133                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-078133 --wait=true                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-078133 addons                                                                        | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-078133 addons                                                                        | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:50 UTC | 15 Sep 24 06:50 UTC |
	|         | -p addons-078133                                                                            |                        |         |         |                     |                     |
	| ip      | addons-078133 ip                                                                            | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-078133 ssh cat                                                                       | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | /opt/local-path-provisioner/pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | addons-078133                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | -p addons-078133                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:52 UTC |
	|         | addons-078133                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-078133 ssh curl -s                                                                   | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:52 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-078133 ip                                                                            | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-078133 addons disable                                                                | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-078133 addons                                                                        | addons-078133          | jenkins | v1.34.0 | 15 Sep 24 06:57 UTC | 15 Sep 24 06:57 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
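	The table above is minikube's audit log for the addons-078133 profile: one row per command, with the binary version, user, and start/end timestamps. Rows with an empty end time, such as the ssh curl probe started at 06:52, never completed. For reference, a reconstructed invocation in the same shape as the rows above (a sketch assembled from the table, not a new command from this run):

	  out/minikube-linux-arm64 -p addons-078133 addons disable metrics-server --alsologtostderr -v=1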
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:38:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:38:12.787229 2523870 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:38:12.787649 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:38:12.787663 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:38:12.787669 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:38:12.787948 2523870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 06:38:12.788417 2523870 out.go:352] Setting JSON to false
	I0915 06:38:12.789322 2523870 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":51644,"bootTime":1726330649,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0915 06:38:12.789406 2523870 start.go:139] virtualization:  
	I0915 06:38:12.792757 2523870 out.go:177] * [addons-078133] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 06:38:12.795650 2523870 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:38:12.795696 2523870 notify.go:220] Checking for updates...
	I0915 06:38:12.799075 2523870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:38:12.801817 2523870 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 06:38:12.804477 2523870 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	I0915 06:38:12.807247 2523870 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 06:38:12.809885 2523870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:38:12.812844 2523870 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:38:12.839036 2523870 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:38:12.839177 2523870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:38:12.891358 2523870 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:38:12.881981504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:38:12.891480 2523870 docker.go:318] overlay module found
	I0915 06:38:12.895859 2523870 out.go:177] * Using the docker driver based on user configuration
	I0915 06:38:12.898575 2523870 start.go:297] selected driver: docker
	I0915 06:38:12.898603 2523870 start.go:901] validating driver "docker" against <nil>
	I0915 06:38:12.898625 2523870 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:38:12.899275 2523870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:38:12.952158 2523870 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:38:12.942889904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:38:12.952417 2523870 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:38:12.952666 2523870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:38:12.955396 2523870 out.go:177] * Using Docker driver with root privileges
	I0915 06:38:12.957978 2523870 cni.go:84] Creating CNI manager for ""
	I0915 06:38:12.958053 2523870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:38:12.958067 2523870 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 06:38:12.958154 2523870 start.go:340] cluster config:
	{Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
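	The cluster config struct above is persisted verbatim to the profile's config.json (see the profile.go save a few lines below). A minimal way to pretty-print it on the CI host, assuming python3 is available there:

	  python3 -m json.tool /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/config.json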
	I0915 06:38:12.961074 2523870 out.go:177] * Starting "addons-078133" primary control-plane node in "addons-078133" cluster
	I0915 06:38:12.963705 2523870 cache.go:121] Beginning downloading kic base image for docker with crio
	I0915 06:38:12.966437 2523870 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:38:12.969038 2523870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:38:12.969094 2523870 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0915 06:38:12.969106 2523870 cache.go:56] Caching tarball of preloaded images
	I0915 06:38:12.969131 2523870 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:38:12.969194 2523870 preload.go:172] Found /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0915 06:38:12.969204 2523870 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 06:38:12.969614 2523870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/config.json ...
	I0915 06:38:12.969647 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/config.json: {Name:mkd56c679d1e8eeb25c48c5bb5d09233f14404e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:12.984555 2523870 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:38:12.984708 2523870 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:38:12.984732 2523870 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:38:12.984740 2523870 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:38:12.984748 2523870 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:38:12.984758 2523870 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:38:30.356936 2523870 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:38:30.356980 2523870 cache.go:194] Successfully downloaded all kic artifacts
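	The digest-pinned kicbase image is loaded from the cached tarball rather than pulled. One way to confirm it is now present in the local daemon, a sketch using standard docker flags:

	  docker images --digests gcr.io/k8s-minikube/kicbase-builds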
	I0915 06:38:30.357009 2523870 start.go:360] acquireMachinesLock for addons-078133: {Name:mkd22383cf6e30905104727dd6882efae296baf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:38:30.357138 2523870 start.go:364] duration metric: took 107.583µs to acquireMachinesLock for "addons-078133"
	I0915 06:38:30.357171 2523870 start.go:93] Provisioning new machine with config: &{Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:38:30.357256 2523870 start.go:125] createHost starting for "" (driver="docker")
	I0915 06:38:30.358886 2523870 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 06:38:30.359147 2523870 start.go:159] libmachine.API.Create for "addons-078133" (driver="docker")
	I0915 06:38:30.359182 2523870 client.go:168] LocalClient.Create starting
	I0915 06:38:30.359309 2523870 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem
	I0915 06:38:31.028935 2523870 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem
	I0915 06:38:31.157412 2523870 cli_runner.go:164] Run: docker network inspect addons-078133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 06:38:31.173542 2523870 cli_runner.go:211] docker network inspect addons-078133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 06:38:31.173630 2523870 network_create.go:284] running [docker network inspect addons-078133] to gather additional debugging logs...
	I0915 06:38:31.173652 2523870 cli_runner.go:164] Run: docker network inspect addons-078133
	W0915 06:38:31.189395 2523870 cli_runner.go:211] docker network inspect addons-078133 returned with exit code 1
	I0915 06:38:31.189428 2523870 network_create.go:287] error running [docker network inspect addons-078133]: docker network inspect addons-078133: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-078133 not found
	I0915 06:38:31.189442 2523870 network_create.go:289] output of [docker network inspect addons-078133]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-078133 not found
	
	** /stderr **
	I0915 06:38:31.189539 2523870 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:38:31.205841 2523870 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001792940}
	I0915 06:38:31.205885 2523870 network_create.go:124] attempt to create docker network addons-078133 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 06:38:31.205944 2523870 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-078133 addons-078133
	I0915 06:38:31.304079 2523870 network_create.go:108] docker network addons-078133 192.168.49.0/24 created
	I0915 06:38:31.304113 2523870 kic.go:121] calculated static IP "192.168.49.2" for the "addons-078133" container
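	The freshly created bridge network can be checked with the same Go template syntax minikube itself uses, for example:

	  docker network inspect addons-078133 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'

	which for this run should print 192.168.49.0/24 gw 192.168.49.1.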
	I0915 06:38:31.304203 2523870 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 06:38:31.322468 2523870 cli_runner.go:164] Run: docker volume create addons-078133 --label name.minikube.sigs.k8s.io=addons-078133 --label created_by.minikube.sigs.k8s.io=true
	I0915 06:38:31.345040 2523870 oci.go:103] Successfully created a docker volume addons-078133
	I0915 06:38:31.345137 2523870 cli_runner.go:164] Run: docker run --rm --name addons-078133-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-078133 --entrypoint /usr/bin/test -v addons-078133:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 06:38:33.575685 2523870 cli_runner.go:217] Completed: docker run --rm --name addons-078133-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-078133 --entrypoint /usr/bin/test -v addons-078133:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (2.230494087s)
	I0915 06:38:33.575720 2523870 oci.go:107] Successfully prepared a docker volume addons-078133
	I0915 06:38:33.575744 2523870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:38:33.575763 2523870 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 06:38:33.575830 2523870 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-078133:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 06:38:37.682758 2523870 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-078133:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.10688552s)
	I0915 06:38:37.682789 2523870 kic.go:203] duration metric: took 4.107023149s to extract preloaded images to volume ...
	W0915 06:38:37.682941 2523870 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0915 06:38:37.683057 2523870 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 06:38:37.735978 2523870 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-078133 --name addons-078133 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-078133 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-078133 --network addons-078133 --ip 192.168.49.2 --volume addons-078133:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
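	Each --publish=127.0.0.1::PORT flag above binds a container port to an ephemeral loopback port on the host, which minikube later resolves via container inspect. The same mapping can be read back directly, e.g.:

	  docker port addons-078133 22/tcp

	which for this run resolves to 127.0.0.1:35748, the SSH endpoint used throughout the rest of the log.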
	I0915 06:38:38.073869 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Running}}
	I0915 06:38:38.096611 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:38:38.117014 2523870 cli_runner.go:164] Run: docker exec addons-078133 stat /var/lib/dpkg/alternatives/iptables
	I0915 06:38:38.193401 2523870 oci.go:144] the created container "addons-078133" has a running status.
	I0915 06:38:38.193429 2523870 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa...
	I0915 06:38:40.103212 2523870 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 06:38:40.124321 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:38:40.145609 2523870 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 06:38:40.145635 2523870 kic_runner.go:114] Args: [docker exec --privileged addons-078133 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 06:38:40.201133 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:38:40.223083 2523870 machine.go:93] provisionDockerMachine start ...
	I0915 06:38:40.223185 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:40.248426 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:40.248710 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:40.248727 2523870 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:38:40.384623 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-078133
	
	I0915 06:38:40.384649 2523870 ubuntu.go:169] provisioning hostname "addons-078133"
	I0915 06:38:40.384719 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:40.402539 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:40.402807 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:40.402827 2523870 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-078133 && echo "addons-078133" | sudo tee /etc/hostname
	I0915 06:38:40.553443 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-078133
	
	I0915 06:38:40.553586 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:40.571125 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:40.571387 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:40.571403 2523870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-078133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-078133/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-078133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:38:40.709939 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:38:40.709969 2523870 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-2517725/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-2517725/.minikube}
	I0915 06:38:40.710052 2523870 ubuntu.go:177] setting up certificates
	I0915 06:38:40.710065 2523870 provision.go:84] configureAuth start
	I0915 06:38:40.710167 2523870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-078133
	I0915 06:38:40.728157 2523870 provision.go:143] copyHostCerts
	I0915 06:38:40.728258 2523870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem (1082 bytes)
	I0915 06:38:40.728439 2523870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem (1123 bytes)
	I0915 06:38:40.728531 2523870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem (1675 bytes)
	I0915 06:38:40.728606 2523870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem org=jenkins.addons-078133 san=[127.0.0.1 192.168.49.2 addons-078133 localhost minikube]
	I0915 06:38:42.353273 2523870 provision.go:177] copyRemoteCerts
	I0915 06:38:42.353353 2523870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:38:42.353400 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.373293 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:42.471278 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 06:38:42.497795 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:38:42.522600 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 06:38:42.547736 2523870 provision.go:87] duration metric: took 1.83765139s to configureAuth
	I0915 06:38:42.547820 2523870 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:38:42.548046 2523870 config.go:182] Loaded profile config "addons-078133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:38:42.548166 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.565534 2523870 main.go:141] libmachine: Using SSH client type: native
	I0915 06:38:42.565797 2523870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35748 <nil> <nil>}
	I0915 06:38:42.565821 2523870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 06:38:42.807672 2523870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 06:38:42.807751 2523870 machine.go:96] duration metric: took 2.584641806s to provisionDockerMachine
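	The drop-in written above marks the 10.96.0.0/12 service CIDR as an insecure registry, so that in-cluster pulls (e.g. from the registry addon's ClusterIP) can use plain HTTP. To confirm it landed on the node, a sketch:

	  out/minikube-linux-arm64 -p addons-078133 ssh -- cat /etc/sysconfig/crio.minikube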
	I0915 06:38:42.807788 2523870 client.go:171] duration metric: took 12.44858555s to LocalClient.Create
	I0915 06:38:42.807845 2523870 start.go:167] duration metric: took 12.448698434s to libmachine.API.Create "addons-078133"
	I0915 06:38:42.807872 2523870 start.go:293] postStartSetup for "addons-078133" (driver="docker")
	I0915 06:38:42.807911 2523870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:38:42.808014 2523870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:38:42.808114 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.826066 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:42.926144 2523870 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:38:42.930078 2523870 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:38:42.930114 2523870 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:38:42.930124 2523870 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:38:42.930131 2523870 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:38:42.930144 2523870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/addons for local assets ...
	I0915 06:38:42.930220 2523870 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/files for local assets ...
	I0915 06:38:42.930252 2523870 start.go:296] duration metric: took 122.36099ms for postStartSetup
	I0915 06:38:42.930585 2523870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-078133
	I0915 06:38:42.948043 2523870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/config.json ...
	I0915 06:38:42.948387 2523870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:38:42.948443 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:42.965578 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:43.062057 2523870 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:38:43.066878 2523870 start.go:128] duration metric: took 12.709604826s to createHost
	I0915 06:38:43.066945 2523870 start.go:83] releasing machines lock for "addons-078133", held for 12.709793154s
	I0915 06:38:43.067058 2523870 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-078133
	I0915 06:38:43.084231 2523870 ssh_runner.go:195] Run: cat /version.json
	I0915 06:38:43.084291 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:43.084556 2523870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:38:43.084638 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:38:43.110679 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:43.113521 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:38:43.205306 2523870 ssh_runner.go:195] Run: systemctl --version
	I0915 06:38:43.331819 2523870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 06:38:43.475451 2523870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:38:43.479654 2523870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:38:43.503032 2523870 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0915 06:38:43.503135 2523870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:38:43.549259 2523870 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
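	Pre-existing bridge and podman CNI configs are renamed with a .mk_disabled suffix so that only the kindnet config minikube installs later is active. The disabled files can be listed on the node with:

	  ls /etc/cni/net.d/*.mk_disabled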
	I0915 06:38:43.549327 2523870 start.go:495] detecting cgroup driver to use...
	I0915 06:38:43.549376 2523870 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:38:43.549460 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 06:38:43.568882 2523870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 06:38:43.581182 2523870 docker.go:217] disabling cri-docker service (if available) ...
	I0915 06:38:43.581292 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 06:38:43.595995 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 06:38:43.611893 2523870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 06:38:43.708103 2523870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 06:38:43.812378 2523870 docker.go:233] disabling docker service ...
	I0915 06:38:43.812466 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 06:38:43.833320 2523870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 06:38:43.845521 2523870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 06:38:43.943839 2523870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 06:38:44.039910 2523870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 06:38:44.052271 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:38:44.069425 2523870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 06:38:44.069497 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.079718 2523870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 06:38:44.079845 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.090489 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.100780 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.111161 2523870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:38:44.120858 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.131104 2523870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.148858 2523870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:38:44.159069 2523870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:38:44.168402 2523870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
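	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with lines equivalent to the following sketch (surrounding TOML section headers omitted, since the log does not show them):

	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]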
	I0915 06:38:44.177003 2523870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:38:44.265072 2523870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 06:38:44.374011 2523870 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 06:38:44.374133 2523870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 06:38:44.378540 2523870 start.go:563] Will wait 60s for crictl version
	I0915 06:38:44.378656 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:38:44.382546 2523870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:38:44.424234 2523870 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0915 06:38:44.424349 2523870 ssh_runner.go:195] Run: crio --version
	I0915 06:38:44.475232 2523870 ssh_runner.go:195] Run: crio --version
	I0915 06:38:44.519124 2523870 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0915 06:38:44.521747 2523870 cli_runner.go:164] Run: docker network inspect addons-078133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:38:44.537582 2523870 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 06:38:44.541419 2523870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:38:44.552857 2523870 kubeadm.go:883] updating cluster {Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:38:44.552984 2523870 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:38:44.553046 2523870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:38:44.633055 2523870 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:38:44.633083 2523870 crio.go:433] Images already preloaded, skipping extraction
	I0915 06:38:44.633143 2523870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:38:44.673366 2523870 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:38:44.673388 2523870 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:38:44.673397 2523870 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0915 06:38:44.673491 2523870 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-078133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
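	The [Service] override above is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp below). The empty ExecStart= line is deliberate systemd syntax: it clears the ExecStart inherited from the stock unit before setting the new one. The merged unit can be reviewed with:

	  systemctl cat kubelet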
	I0915 06:38:44.673581 2523870 ssh_runner.go:195] Run: crio config
	I0915 06:38:44.732765 2523870 cni.go:84] Creating CNI manager for ""
	I0915 06:38:44.732858 2523870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:38:44.732877 2523870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:38:44.732902 2523870 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-078133 NodeName:addons-078133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:38:44.733049 2523870 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-078133"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
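	The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below). With kubeadm v1.31 it could be sanity-checked on the node with something along these lines (a sketch, not a step this run performs):

	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new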
	
	I0915 06:38:44.733130 2523870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:38:44.741946 2523870 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:38:44.742045 2523870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:38:44.750784 2523870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0915 06:38:44.770200 2523870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:38:44.789649 2523870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0915 06:38:44.808669 2523870 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:38:44.812327 2523870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:38:44.823008 2523870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:38:44.913291 2523870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:38:44.927747 2523870 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133 for IP: 192.168.49.2
	I0915 06:38:44.927778 2523870 certs.go:194] generating shared ca certs ...
	I0915 06:38:44.927795 2523870 certs.go:226] acquiring lock for ca certs: {Name:mk5e6b4b1562ab546f1aa57699f236200f49d7e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:44.928715 2523870 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key
	I0915 06:38:45.326164 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt ...
	I0915 06:38:45.326211 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt: {Name:mk5bc462617f9659ba52a2152c2f6ee2c4afd336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.326491 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key ...
	I0915 06:38:45.326511 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key: {Name:mke6fb53bd94c120122c79adc8bb1635818a4c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.326662 2523870 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key
	I0915 06:38:45.743346 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt ...
	I0915 06:38:45.743380 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt: {Name:mk061dad5fc3f04b4c5728856758e4e719a722f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.743581 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key ...
	I0915 06:38:45.743595 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key: {Name:mk8f4151cf3bb4e60b32b8767dc2cf5cf44a4505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:45.743681 2523870 certs.go:256] generating profile certs ...
	I0915 06:38:45.743744 2523870 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.key
	I0915 06:38:45.743762 2523870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt with IP's: []
	I0915 06:38:46.183135 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt ...
	I0915 06:38:46.183178 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: {Name:mkf0bebdecf567120b50e3d4771ed97fb5f77b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:46.184171 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.key ...
	I0915 06:38:46.184189 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.key: {Name:mkae22a5721ba63055014519e5295d510f1c607b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:46.184290 2523870 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b
	I0915 06:38:46.184313 2523870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 06:38:47.375989 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b ...
	I0915 06:38:47.376029 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b: {Name:mkbb0cbab611271bcaa81d025cb58e0f49d6b725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:47.376266 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b ...
	I0915 06:38:47.376282 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b: {Name:mk44cadca365ce4b4475fd5ecbd0d3a7ab4a5e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:47.376377 2523870 certs.go:381] copying /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt.406aa73b -> /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt
	I0915 06:38:47.376469 2523870 certs.go:385] copying /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key.406aa73b -> /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key
	I0915 06:38:47.376532 2523870 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key
	I0915 06:38:47.376553 2523870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt with IP's: []
	I0915 06:38:48.296446 2523870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt ...
	I0915 06:38:48.296479 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt: {Name:mk03e5126ebac87175cd074a3278a221669ecd43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:48.296678 2523870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key ...
	I0915 06:38:48.296694 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key: {Name:mk184d4436eb1531806b2bfcf3dbee00f090f348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:38:48.296914 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 06:38:48.296959 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem (1082 bytes)
	I0915 06:38:48.296989 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:38:48.297016 2523870 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem (1675 bytes)
	I0915 06:38:48.297633 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:38:48.326882 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:38:48.352922 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:38:48.378019 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 06:38:48.403101 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:38:48.427999 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 06:38:48.452962 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:38:48.477908 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 06:38:48.503859 2523870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:38:48.530602 2523870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
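The block above is minikube's certificate bootstrap: certs.go/crypto.go generate the shared minikubeCA and proxyClientCA, sign the profile certs (the client cert, the apiserver cert for the service and node IPs, and the "aggregator" proxy-client cert), then scp everything into /var/lib/minikube/certs on the node. As a rough illustration of the CA-creation step only, here is a minimal self-signed-CA sketch using Go's crypto/x509; every field value is an illustrative assumption, not minikube's actual certs.go code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Generate the CA key pair (key size is an assumption).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Self-signed CA template named like the one in the log.
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Template == parent makes the certificate self-signed.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		// Emit ca.crt and ca.key material as PEM (to stdout here).
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}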
	I0915 06:38:48.549981 2523870 ssh_runner.go:195] Run: openssl version
	I0915 06:38:48.555953 2523870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:38:48.566111 2523870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:38:48.569738 2523870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:38 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:38:48.569808 2523870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:38:48.577078 2523870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
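The two steps above install the CA where OpenSSL can find it: the cert is linked into /etc/ssl/certs, and `openssl x509 -hash -noout` computes the subject hash (b5213941 here) that OpenSSL uses as the `<hash>.0` lookup filename. A minimal local sketch of that hash-and-symlink step, using the same paths as the log (root privileges assumed, error handling trimmed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask openssl for the CA's subject hash, as in the log line above.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Emulate `ln -fs`: replace any existing link, then create it.
		_ = os.Remove(link)
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}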
	I0915 06:38:48.587122 2523870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:38:48.590775 2523870 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:38:48.590821 2523870 kubeadm.go:392] StartCluster: {Name:addons-078133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-078133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:38:48.590906 2523870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 06:38:48.590965 2523870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 06:38:48.629289 2523870 cri.go:89] found id: ""
	I0915 06:38:48.629429 2523870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:38:48.638918 2523870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:38:48.648246 2523870 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 06:38:48.648316 2523870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:38:48.657387 2523870 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:38:48.657405 2523870 kubeadm.go:157] found existing configuration files:
	
	I0915 06:38:48.657462 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:38:48.666518 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:38:48.666640 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:38:48.675439 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:38:48.684448 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:38:48.684566 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:38:48.693351 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:38:48.702264 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:38:48.702338 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:38:48.711186 2523870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:38:48.720567 2523870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:38:48.720649 2523870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
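The grep/rm sequence above is minikube's stale-kubeconfig sweep: each of the four kubeconfigs under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and removed otherwise so kubeadm regenerates it. Here all four are absent (a first start), so every `rm -f` is a no-op. A simplified, local-filesystem sketch of the same check-and-remove loop (the real code runs these commands on the node over ssh_runner):

	package main

	import (
		"bytes"
		"os"
		"path/filepath"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		for _, name := range []string{"admin.conf", "kubelet.conf",
			"controller-manager.conf", "scheduler.conf"} {
			path := filepath.Join("/etc/kubernetes", name)
			data, err := os.ReadFile(path)
			// Missing or pointing elsewhere: delete, ignoring errors like `rm -f`.
			if err != nil || !bytes.Contains(data, endpoint) {
				os.Remove(path)
			}
		}
	}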
	I0915 06:38:48.730182 2523870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 06:38:48.780919 2523870 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:38:48.781052 2523870 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:38:48.802135 2523870 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0915 06:38:48.802289 2523870 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0915 06:38:48.802372 2523870 kubeadm.go:310] OS: Linux
	I0915 06:38:48.802466 2523870 kubeadm.go:310] CGROUPS_CPU: enabled
	I0915 06:38:48.802552 2523870 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0915 06:38:48.802630 2523870 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0915 06:38:48.802710 2523870 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0915 06:38:48.802818 2523870 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0915 06:38:48.802915 2523870 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0915 06:38:48.803014 2523870 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0915 06:38:48.803111 2523870 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0915 06:38:48.803189 2523870 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0915 06:38:48.874483 2523870 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:38:48.874665 2523870 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:38:48.874796 2523870 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:38:48.883798 2523870 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:38:48.887479 2523870 out.go:235]   - Generating certificates and keys ...
	I0915 06:38:48.887581 2523870 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:38:48.887682 2523870 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:38:49.339220 2523870 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:38:49.759961 2523870 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:38:49.944078 2523870 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:38:50.140723 2523870 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:38:50.666643 2523870 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:38:50.666794 2523870 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-078133 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:38:51.163173 2523870 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:38:51.163312 2523870 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-078133 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:38:52.181466 2523870 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:38:53.099402 2523870 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:38:53.475256 2523870 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:38:53.475495 2523870 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:38:53.868399 2523870 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:38:54.581730 2523870 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:38:55.110775 2523870 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:38:55.547546 2523870 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:38:55.827561 2523870 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:38:55.828306 2523870 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:38:55.831902 2523870 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:38:55.835154 2523870 out.go:235]   - Booting up control plane ...
	I0915 06:38:55.835337 2523870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:38:55.835455 2523870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:38:55.836739 2523870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:38:55.846862 2523870 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:38:55.852654 2523870 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:38:55.852715 2523870 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:38:55.945745 2523870 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:38:55.945867 2523870 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:38:56.449913 2523870 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.018783ms
	I0915 06:38:56.450000 2523870 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:39:03.453388 2523870 kubeadm.go:310] [api-check] The API server is healthy after 7.001427516s
	I0915 06:39:03.470476 2523870 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:39:03.486771 2523870 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:39:03.522770 2523870 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:39:03.522970 2523870 kubeadm.go:310] [mark-control-plane] Marking the node addons-078133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:39:03.536015 2523870 kubeadm.go:310] [bootstrap-token] Using token: 4rqqjy.4t6rodzggmhhv6z7
	I0915 06:39:03.540612 2523870 out.go:235]   - Configuring RBAC rules ...
	I0915 06:39:03.540745 2523870 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:39:03.546080 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:39:03.556664 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:39:03.561376 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:39:03.565561 2523870 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:39:03.569472 2523870 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:39:03.858387 2523870 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:39:04.293335 2523870 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:39:04.857982 2523870 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:39:04.859195 2523870 kubeadm.go:310] 
	I0915 06:39:04.859277 2523870 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:39:04.859289 2523870 kubeadm.go:310] 
	I0915 06:39:04.859390 2523870 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:39:04.859410 2523870 kubeadm.go:310] 
	I0915 06:39:04.859436 2523870 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:39:04.859496 2523870 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:39:04.859547 2523870 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:39:04.859551 2523870 kubeadm.go:310] 
	I0915 06:39:04.859605 2523870 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:39:04.859610 2523870 kubeadm.go:310] 
	I0915 06:39:04.859656 2523870 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:39:04.859661 2523870 kubeadm.go:310] 
	I0915 06:39:04.859713 2523870 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:39:04.859787 2523870 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:39:04.859854 2523870 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:39:04.859859 2523870 kubeadm.go:310] 
	I0915 06:39:04.859942 2523870 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:39:04.860018 2523870 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:39:04.860024 2523870 kubeadm.go:310] 
	I0915 06:39:04.860106 2523870 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4rqqjy.4t6rodzggmhhv6z7 \
	I0915 06:39:04.860208 2523870 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f02174f41dc6c5be174745b50e9cc9798f9f759608b7a0f4d9403600d367dc26 \
	I0915 06:39:04.860228 2523870 kubeadm.go:310] 	--control-plane 
	I0915 06:39:04.860233 2523870 kubeadm.go:310] 
	I0915 06:39:04.860316 2523870 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:39:04.860321 2523870 kubeadm.go:310] 
	I0915 06:39:04.860401 2523870 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4rqqjy.4t6rodzggmhhv6z7 \
	I0915 06:39:04.860502 2523870 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f02174f41dc6c5be174745b50e9cc9798f9f759608b7a0f4d9403600d367dc26 
	I0915 06:39:04.863766 2523870 kubeadm.go:310] W0915 06:38:48.777179    1185 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:39:04.864101 2523870 kubeadm.go:310] W0915 06:38:48.777944    1185 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:39:04.864322 2523870 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0915 06:39:04.864429 2523870 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
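The kubeadm join commands above embed a --discovery-token-ca-cert-hash, which per kubeadm's documented format is "sha256:" followed by the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch that recomputes it from the ca.crt copied onto the node earlier (path taken from the scp lines above):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}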
	I0915 06:39:04.864452 2523870 cni.go:84] Creating CNI manager for ""
	I0915 06:39:04.864461 2523870 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:39:04.867489 2523870 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0915 06:39:04.870221 2523870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0915 06:39:04.874336 2523870 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0915 06:39:04.874362 2523870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0915 06:39:04.894284 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
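The CNI block above shows cni.go choosing a plugin from the driver/runtime pair: the docker driver plus the crio runtime yields kindnet, whose manifest is then written to /var/tmp/minikube/cni.yaml and applied with the bundled kubectl. A toy sketch of that decision point; only the docker+crio branch reflects the log, the fallback is a placeholder rather than minikube's real table:

	package main

	import "fmt"

	// chooseCNI mirrors, in simplified form, the recommendation logged above.
	func chooseCNI(driver, runtime string) string {
		if driver == "docker" && runtime == "crio" {
			return "kindnet"
		}
		return "(other driver/runtime branches omitted)"
	}

	func main() {
		fmt.Println(chooseCNI("docker", "crio")) // prints "kindnet"
	}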
	I0915 06:39:05.208677 2523870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:39:05.208832 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:05.208913 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-078133 minikube.k8s.io/updated_at=2024_09_15T06_39_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-078133 minikube.k8s.io/primary=true
	I0915 06:39:05.363687 2523870 ops.go:34] apiserver oom_adj: -16
	I0915 06:39:05.363789 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:05.864408 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:06.363995 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:06.864868 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:07.364405 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:07.864339 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:08.364323 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:08.863944 2523870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:39:09.038552 2523870 kubeadm.go:1113] duration metric: took 3.829784576s to wait for elevateKubeSystemPrivileges
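The burst of `kubectl get sa default` runs above is the elevateKubeSystemPrivileges wait: after creating the minikube-rbac cluster-admin binding, minikube polls until the default service account exists, which took 3.83s here. A simplified sketch of that poll loop; the timeout value is an assumption, the ~500ms cadence matches the log timestamps:

	package main

	import (
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative timeout, not minikube's
		for time.Now().Before(deadline) {
			// Succeeds once the default service account has been created.
			if exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
				"get", "sa", "default").Run() == nil {
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for default service account")
	}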
	I0915 06:39:09.038581 2523870 kubeadm.go:394] duration metric: took 20.447764237s to StartCluster
	I0915 06:39:09.038600 2523870 settings.go:142] acquiring lock: {Name:mka250035ae8fe54edf72ffd2d620ea51b449138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:39:09.038726 2523870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 06:39:09.039111 2523870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/kubeconfig: {Name:mkc3f194059147bb4fbadd341bbbabf67fee0985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:39:09.039939 2523870 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:39:09.040131 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:39:09.040325 2523870 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0915 06:39:09.040408 2523870 config.go:182] Loaded profile config "addons-078133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:39:09.040435 2523870 addons.go:69] Setting yakd=true in profile "addons-078133"
	I0915 06:39:09.040446 2523870 addons.go:69] Setting inspektor-gadget=true in profile "addons-078133"
	I0915 06:39:09.040451 2523870 addons.go:234] Setting addon yakd=true in "addons-078133"
	I0915 06:39:09.040456 2523870 addons.go:234] Setting addon inspektor-gadget=true in "addons-078133"
	I0915 06:39:09.040480 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.040485 2523870 addons.go:69] Setting cloud-spanner=true in profile "addons-078133"
	I0915 06:39:09.040495 2523870 addons.go:234] Setting addon cloud-spanner=true in "addons-078133"
	I0915 06:39:09.040508 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041050 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041482 2523870 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-078133"
	I0915 06:39:09.041560 2523870 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-078133"
	I0915 06:39:09.041613 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041647 2523870 addons.go:69] Setting metrics-server=true in profile "addons-078133"
	I0915 06:39:09.041912 2523870 addons.go:234] Setting addon metrics-server=true in "addons-078133"
	I0915 06:39:09.041934 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.042360 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.042974 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041662 2523870 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-078133"
	I0915 06:39:09.043422 2523870 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-078133"
	I0915 06:39:09.043458 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.044071 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.052905 2523870 out.go:177] * Verifying Kubernetes components...
	I0915 06:39:09.041670 2523870 addons.go:69] Setting registry=true in profile "addons-078133"
	I0915 06:39:09.053360 2523870 addons.go:234] Setting addon registry=true in "addons-078133"
	I0915 06:39:09.041677 2523870 addons.go:69] Setting storage-provisioner=true in profile "addons-078133"
	I0915 06:39:09.053594 2523870 addons.go:234] Setting addon storage-provisioner=true in "addons-078133"
	I0915 06:39:09.053698 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041685 2523870 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-078133"
	I0915 06:39:09.056926 2523870 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-078133"
	I0915 06:39:09.057295 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.062965 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041693 2523870 addons.go:69] Setting volcano=true in profile "addons-078133"
	I0915 06:39:09.065091 2523870 addons.go:234] Setting addon volcano=true in "addons-078133"
	I0915 06:39:09.065130 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.065593 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.063209 2523870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:39:09.040480 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.041789 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041702 2523870 addons.go:69] Setting volumesnapshots=true in profile "addons-078133"
	I0915 06:39:09.085273 2523870 addons.go:234] Setting addon volumesnapshots=true in "addons-078133"
	I0915 06:39:09.085333 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.085846 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041796 2523870 addons.go:69] Setting gcp-auth=true in profile "addons-078133"
	I0915 06:39:09.086076 2523870 mustload.go:65] Loading cluster: addons-078133
	I0915 06:39:09.086239 2523870 config.go:182] Loaded profile config "addons-078133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:39:09.086465 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041801 2523870 addons.go:69] Setting default-storageclass=true in profile "addons-078133"
	I0915 06:39:09.094560 2523870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-078133"
	I0915 06:39:09.094904 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041806 2523870 addons.go:69] Setting ingress=true in profile "addons-078133"
	I0915 06:39:09.105001 2523870 addons.go:234] Setting addon ingress=true in "addons-078133"
	I0915 06:39:09.105055 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.105584 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.041811 2523870 addons.go:69] Setting ingress-dns=true in profile "addons-078133"
	I0915 06:39:09.105828 2523870 addons.go:234] Setting addon ingress-dns=true in "addons-078133"
	I0915 06:39:09.105864 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.106291 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.063670 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.139706 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.157805 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.241029 2523870 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:39:09.244895 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:39:09.244991 2523870 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:39:09.245101 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.252566 2523870 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:39:09.255882 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:39:09.255913 2523870 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:39:09.255985 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.305949 2523870 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:39:09.309848 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:39:09.310085 2523870 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:39:09.310113 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:39:09.310186 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
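Interleaved with the addon installs, each `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` call above resolves the host port Docker mapped to the container's sshd, which is how the sshutil lines further down all land on 127.0.0.1:35748. The same lookup as a standalone sketch (container name taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Go-template query for the host port bound to the container's 22/tcp.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"addons-078133").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 35748
	}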
	I0915 06:39:09.322978 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:39:09.329149 2523870 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-078133"
	I0915 06:39:09.329212 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.329744 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.346286 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:39:09.349169 2523870 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:39:09.349337 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:39:09.349376 2523870 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:39:09.349484 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.354629 2523870 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:39:09.354704 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:39:09.354789 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.367623 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:39:09.389092 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:39:09.389347 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:39:09.389610 2523870 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:39:09.389626 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:39:09.389688 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	W0915 06:39:09.391591 2523870 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0915 06:39:09.391963 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.396501 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:39:09.398337 2523870 addons.go:234] Setting addon default-storageclass=true in "addons-078133"
	I0915 06:39:09.398383 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:09.398799 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:09.406062 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:39:09.406277 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 06:39:09.406914 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:39:09.411306 2523870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:39:09.411331 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:39:09.411398 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.432227 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:39:09.434825 2523870 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:39:09.435043 2523870 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:39:09.435065 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:39:09.435134 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.437472 2523870 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:39:09.437496 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:39:09.437566 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.453082 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:39:09.457762 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:39:09.462413 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:39:09.468969 2523870 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:39:09.471555 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:39:09.471593 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:39:09.471669 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.482934 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 06:39:09.483223 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.484125 2523870 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:39:09.487259 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:39:09.487279 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:39:09.487344 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.520984 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.593269 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.596982 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.597062 2523870 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:39:09.599402 2523870 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:39:09.599428 2523870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:39:09.599501 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.602275 2523870 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:39:09.604798 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.607521 2523870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:39:09.607774 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:39:09.608168 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:09.621024 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.634782 2523870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:39:09.641915 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.644998 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.679310 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.699858 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.709617 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.725574 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.726343 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:09.967170 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:39:09.967196 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:39:10.051753 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:39:10.051784 2523870 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:39:10.123585 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:39:10.131017 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:39:10.155112 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:39:10.155140 2523870 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:39:10.162216 2523870 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:39:10.162242 2523870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:39:10.168215 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:39:10.200571 2523870 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:39:10.200648 2523870 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:39:10.204330 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:39:10.207613 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:39:10.207693 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:39:10.221132 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:39:10.221213 2523870 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:39:10.229090 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:39:10.232441 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:39:10.236135 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:39:10.253555 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:39:10.253632 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:39:10.314939 2523870 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:39:10.315016 2523870 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:39:10.319329 2523870 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:39:10.319406 2523870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:39:10.359489 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:39:10.359560 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:39:10.377308 2523870 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:39:10.377381 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:39:10.388486 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:39:10.388563 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:39:10.430613 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:39:10.430693 2523870 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:39:10.536291 2523870 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:39:10.536370 2523870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:39:10.546167 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:39:10.563456 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:39:10.563540 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:39:10.590878 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:39:10.595036 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:39:10.595130 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:39:10.651963 2523870 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:39:10.652038 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:39:10.780564 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:39:10.780649 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:39:10.783802 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:39:10.783880 2523870 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:39:10.787389 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:39:10.787467 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:39:10.855263 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:39:10.910709 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:39:10.910790 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:39:10.943539 2523870 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:39:10.943619 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:39:10.947004 2523870 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:39:10.947081 2523870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:39:10.975982 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:39:10.976062 2523870 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:39:11.041384 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:39:11.041456 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:39:11.041859 2523870 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:39:11.041910 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:39:11.067123 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:39:11.169804 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:39:11.187844 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:39:11.187928 2523870 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:39:11.413987 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:39:11.414061 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:39:11.545139 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:39:11.545161 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:39:11.690868 2523870 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:39:11.690891 2523870 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:39:11.861968 2523870 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.378992448s)
	I0915 06:39:11.861995 2523870 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
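The sed pipeline above splices a hosts stanza into the Corefile ahead of the forward block, so host.minikube.internal resolves to the gateway IP. The injected block, as written by that command, is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

One hedged way to confirm it landed, assuming the stock kube-system/coredns ConfigMap:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'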
	I0915 06:39:11.863108 2523870 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.22829938s)
	I0915 06:39:11.863907 2523870 node_ready.go:35] waiting up to 6m0s for node "addons-078133" to be "Ready" ...
	I0915 06:39:11.925007 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:39:12.734191 2523870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-078133" context rescaled to 1 replicas
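The rescale minikube performs here is roughly the manual equivalent of:

	kubectl -n kube-system scale deployment coredns --replicas=1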
	I0915 06:39:13.816313 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.692684755s)
	I0915 06:39:13.816426 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.685386035s)
	I0915 06:39:13.816486 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.648202296s)
	I0915 06:39:13.948928 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:14.413876 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.209453947s)
	I0915 06:39:15.491159 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.261979832s)
	I0915 06:39:15.491246 2523870 addons.go:475] Verifying addon ingress=true in "addons-078133"
	I0915 06:39:15.491560 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.259043386s)
	I0915 06:39:15.491668 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.255460851s)
	I0915 06:39:15.491897 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.945656931s)
	I0915 06:39:15.491911 2523870 addons.go:475] Verifying addon metrics-server=true in "addons-078133"
	I0915 06:39:15.491940 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.900983898s)
	I0915 06:39:15.491947 2523870 addons.go:475] Verifying addon registry=true in "addons-078133"
	I0915 06:39:15.492354 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.637011622s)
	I0915 06:39:15.492468 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.425238269s)
	I0915 06:39:15.492570 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.322686637s)
	W0915 06:39:15.492507 2523870 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:39:15.492702 2523870 retry.go:31] will retry after 365.365183ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
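This failure is a CRD establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, before the API server has registered the new kind. minikube simply retries (and, below, re-applies with --force); done by hand, a hedged fix is to wait for the CRD to become Established before applying the class:

	kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml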
	I0915 06:39:15.494865 2523870 out.go:177] * Verifying registry addon...
	I0915 06:39:15.494883 2523870 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-078133 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:39:15.494996 2523870 out.go:177] * Verifying ingress addon...
	I0915 06:39:15.499126 2523870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:39:15.499146 2523870 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:39:15.508673 2523870 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:39:15.508703 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:15.509966 2523870 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:39:15.510037 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
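The kapi waits above and below poll the logged label selectors until every matching pod reports Ready; checked by hand, the equivalents would be:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx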
	W0915 06:39:15.524385 2523870 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
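The storageclass error above is ordinary optimistic-concurrency contention: the update was submitted with a stale resourceVersion because another client modified local-path in between. It is transient; retrying with a freshly read object works, and a merge patch (which carries no resourceVersion) sidesteps the conflict entirely. A hedged sketch of the patch route:

	kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'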
	I0915 06:39:15.858832 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:39:15.879445 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.954334967s)
	I0915 06:39:15.879493 2523870 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-078133"
	I0915 06:39:15.882304 2523870 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:39:15.886174 2523870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:39:15.939391 2523870 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:39:15.939465 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:16.048275 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:16.059314 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:16.367719 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:16.390881 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:16.513275 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:16.521440 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:16.891066 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:17.005641 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:17.007645 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:17.130505 2523870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.27161243s)
	I0915 06:39:17.390841 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:17.503165 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:17.504695 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:17.890914 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:18.008065 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:18.009583 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:18.371574 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:18.390782 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:18.506247 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:18.506438 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:18.560915 2523870 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:39:18.560997 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:18.579856 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
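The HostPort lookup feeding this ssh client is a Go template evaluated by docker inspect; run standalone it prints the host port mapped to the container's SSH port (35748 here):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-078133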
	I0915 06:39:18.744915 2523870 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:39:18.764474 2523870 addons.go:234] Setting addon gcp-auth=true in "addons-078133"
	I0915 06:39:18.764523 2523870 host.go:66] Checking if "addons-078133" exists ...
	I0915 06:39:18.765025 2523870 cli_runner.go:164] Run: docker container inspect addons-078133 --format={{.State.Status}}
	I0915 06:39:18.782156 2523870 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:39:18.782213 2523870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-078133
	I0915 06:39:18.801456 2523870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35748 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/addons-078133/id_rsa Username:docker}
	I0915 06:39:18.904312 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:18.904653 2523870 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:39:18.907445 2523870 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:39:18.910534 2523870 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:39:18.910565 2523870 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:39:18.936545 2523870 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:39:18.936579 2523870 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:39:18.963991 2523870 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:39:18.964067 2523870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:39:19.000463 2523870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:39:19.016170 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:19.018516 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:19.395257 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:19.504167 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:19.505568 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:19.690148 2523870 addons.go:475] Verifying addon gcp-auth=true in "addons-078133"
	I0915 06:39:19.694850 2523870 out.go:177] * Verifying gcp-auth addon...
	I0915 06:39:19.714020 2523870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:39:19.735242 2523870 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:39:19.735265 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:19.889636 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:20.006962 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:20.015633 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:20.219761 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:20.390783 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:20.503049 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:20.503934 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:20.717230 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:20.867048 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:20.890525 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:21.008560 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:21.010633 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:21.218675 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:21.398063 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:21.503634 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:21.505331 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:21.718256 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:21.891285 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:22.004961 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:22.006610 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:22.219382 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:22.391119 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:22.505105 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:22.506699 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:22.718469 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:22.868045 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:22.891039 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:23.006023 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:23.007330 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:23.217716 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:23.392441 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:23.504360 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:23.505442 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:23.718077 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:23.890026 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:24.009952 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:24.011764 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:24.217196 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:24.390856 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:24.503823 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:24.504306 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:24.717265 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:24.890322 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:25.004815 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:25.009217 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:25.218931 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:25.368330 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:25.390248 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:25.504490 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:25.504784 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:25.718031 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:25.889897 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:26.006178 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:26.009321 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:26.217851 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:26.390260 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:26.503645 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:26.503929 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:26.717228 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:26.889966 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:27.005860 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:27.006534 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:27.217232 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:27.391379 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:27.503218 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:27.504180 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:27.717918 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:27.867581 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:27.890599 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:28.008041 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:28.010528 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:28.218488 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:28.390431 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:28.503223 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:28.503754 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:28.718274 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:28.890278 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:29.004652 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:29.006990 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:29.217428 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:29.390775 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:29.503442 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:29.504951 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:29.717347 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:29.867767 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:29.889736 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:30.013658 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:30.013836 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:30.219186 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:30.391799 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:30.503268 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:30.504148 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:30.717747 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:30.890714 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:31.004930 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:31.005992 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:31.217720 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:31.390558 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:31.503622 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:31.504583 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:31.718229 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:31.890555 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:32.008758 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:32.009715 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:32.217800 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:32.367710 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:32.389503 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:32.504290 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:32.504617 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:32.718358 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:32.890232 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:33.013792 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:33.014310 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:33.217772 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:33.389964 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:33.503854 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:33.504297 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:33.718265 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:33.890626 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:34.005812 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:34.007225 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:34.218580 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:34.368052 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:34.389929 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:34.502638 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:34.503613 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:34.718366 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:34.891557 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:35.009694 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:35.021653 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:35.218731 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:35.390461 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:35.504550 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:35.506436 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:35.718202 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:35.890352 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:36.006752 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:36.008736 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:36.217910 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:36.390208 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:36.503044 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:36.503488 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:36.717595 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:36.867872 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:36.890611 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:37.007512 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:37.008318 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:37.217196 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:37.389970 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:37.502759 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:37.503952 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:37.717068 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:37.890324 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:38.008794 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:38.009771 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:38.217829 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:38.389937 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:38.503592 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:38.504486 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:38.717991 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:38.890450 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:39.008193 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:39.009653 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:39.226065 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:39.367638 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:39.390621 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:39.507715 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:39.508472 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:39.718445 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:39.890449 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:40.011215 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:40.031551 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:40.218036 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:40.390520 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:40.506183 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:40.507671 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:40.718484 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:40.889891 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:41.006703 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:41.007677 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:41.217954 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:41.368038 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:41.390857 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:41.502948 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:41.503795 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:41.723269 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:41.890629 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:42.009905 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:42.010464 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:42.217795 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:42.390908 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:42.503860 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:42.504836 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:42.717714 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:42.890761 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:43.007858 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:43.008735 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:43.217902 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:43.389922 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:43.502784 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:43.503593 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:43.717585 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:43.868251 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:43.890507 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:44.014356 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:44.014574 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:44.218704 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:44.390683 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:44.503015 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:44.503922 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:44.717370 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:44.890339 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:45.006474 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:45.008151 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:45.218416 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:45.390283 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:45.503879 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:45.504683 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:45.717454 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:45.890475 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:46.008464 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:46.011999 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:46.217682 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:46.367996 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:46.390451 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:46.503110 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:46.504008 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:46.717277 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:46.890358 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:47.006411 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:47.007378 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:47.217355 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:47.390037 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:47.503022 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:47.503858 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:47.717276 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:47.890100 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:48.011525 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:48.014501 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:48.217881 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:48.390415 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:48.502868 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:48.503714 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:48.717603 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:48.868116 2523870 node_ready.go:53] node "addons-078133" has status "Ready":"False"
	I0915 06:39:48.889580 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:49.007659 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:49.008613 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:49.221630 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:49.390355 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:49.503859 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:49.504764 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:49.717278 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:49.890162 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:50.016362 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:50.016914 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:50.218199 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:50.390287 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:50.503347 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:50.504044 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:50.717043 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:50.890485 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:51.049786 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:51.062794 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:51.224379 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:51.397876 2523870 node_ready.go:49] node "addons-078133" has status "Ready":"True"
	I0915 06:39:51.397903 2523870 node_ready.go:38] duration metric: took 39.533978864s for node "addons-078133" to be "Ready" ...
	I0915 06:39:51.397914 2523870 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
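A hedged kubectl rendering of these two waits (node name and one of the logged labels; timeouts as stated):

	kubectl wait --for=condition=Ready node/addons-078133 --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m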
	I0915 06:39:51.427264 2523870 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:39:51.427292 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:51.464114 2523870 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7vkbz" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:51.590510 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:51.591035 2523870 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:39:51.591054 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:51.769687 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:51.901853 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:52.030916 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:52.032462 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:52.223429 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:52.391680 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:52.523484 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:52.524528 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:52.718617 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:52.891172 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:52.971134 2523870 pod_ready.go:93] pod "coredns-7c65d6cfc9-7vkbz" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.971160 2523870 pod_ready.go:82] duration metric: took 1.507009842s for pod "coredns-7c65d6cfc9-7vkbz" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.971209 2523870 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.977562 2523870 pod_ready.go:93] pod "etcd-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.977605 2523870 pod_ready.go:82] duration metric: took 6.380539ms for pod "etcd-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.977622 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.984413 2523870 pod_ready.go:93] pod "kube-apiserver-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.984443 2523870 pod_ready.go:82] duration metric: took 6.771659ms for pod "kube-apiserver-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.984456 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.990371 2523870 pod_ready.go:93] pod "kube-controller-manager-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.990397 2523870 pod_ready.go:82] duration metric: took 5.931499ms for pod "kube-controller-manager-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.990414 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fjj4k" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.996392 2523870 pod_ready.go:93] pod "kube-proxy-fjj4k" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:52.996424 2523870 pod_ready.go:82] duration metric: took 6.001429ms for pod "kube-proxy-fjj4k" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:52.996438 2523870 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:53.009143 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:53.010564 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:53.218339 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:53.368479 2523870 pod_ready.go:93] pod "kube-scheduler-addons-078133" in "kube-system" namespace has status "Ready":"True"
	I0915 06:39:53.368505 2523870 pod_ready.go:82] duration metric: took 372.058726ms for pod "kube-scheduler-addons-078133" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:53.368517 2523870 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace to be "Ready" ...
	I0915 06:39:53.391482 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:53.508086 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:53.509396 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:53.719334 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:53.893534 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:54.008069 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:54.009214 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:54.220473 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:54.393145 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:54.506031 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:54.515648 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:54.718589 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:54.892614 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:55.007453 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:55.010827 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:55.222250 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:55.376527 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:39:55.392570 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:55.506637 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:55.508411 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:55.718235 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:55.891769 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:56.006852 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:56.009587 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:56.219174 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:56.390762 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:56.504692 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:56.506044 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:56.718089 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:56.901935 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:57.005894 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:57.007119 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:57.218515 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:57.392369 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:57.506920 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:57.508332 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:57.717995 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:57.875345 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:39:57.892007 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:58.006101 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:58.006268 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:58.226454 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:58.392438 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:58.506852 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:58.507582 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:58.718390 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:58.893006 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:59.004892 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:59.007281 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:59.218349 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:59.391747 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:39:59.507785 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:39:59.511002 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:39:59.718650 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:59.876003 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:39:59.892455 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:00.007347 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:00.009528 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:00.245436 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:00.508623 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:00.535863 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:00.537735 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:00.723119 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:00.901726 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:01.012175 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:01.013228 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:01.223627 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:01.397325 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:01.508050 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:01.509577 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:01.719168 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:01.876338 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:01.893359 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:02.016637 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:02.019038 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:02.219910 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:02.392659 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:02.529881 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:02.531435 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:02.719132 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:02.893546 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:03.012685 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:03.014579 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:03.224218 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:03.391738 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:03.508749 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:03.512180 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:03.719109 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:03.876617 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:03.893892 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:04.012887 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:04.014341 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:04.218097 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:04.392063 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:04.503904 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:04.504946 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:04.717690 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:04.891182 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:05.010877 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:05.011628 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:05.217387 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:05.399458 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:05.505163 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:05.506344 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:05.721686 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:05.876868 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:05.893999 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:06.009105 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:06.010539 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:06.218863 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:06.391805 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:06.504869 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:06.505897 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:06.717807 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:06.900869 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:07.011645 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:07.012942 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:07.217184 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:07.391107 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:07.504957 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:07.505322 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:07.717633 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:07.899952 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:08.011925 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:08.013069 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:08.217268 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:08.376650 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:08.397803 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:08.505492 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:08.506686 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:08.718464 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:08.891562 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:09.005433 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:09.007473 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:09.218676 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:09.393023 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:09.504274 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:09.504893 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:09.720362 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:09.900991 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:10.009437 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:10.010607 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:10.217916 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:10.391420 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:10.503362 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:10.504726 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:10.718554 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:10.875439 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:10.891030 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:11.006830 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:11.007545 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:11.218297 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:11.394784 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:11.505674 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:11.507120 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:11.717797 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:11.892090 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:12.012833 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:12.014665 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:12.218750 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:12.391423 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:12.504227 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:12.505056 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:12.717972 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:12.891091 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:13.004369 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:13.006898 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:13.217462 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:13.375022 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:13.391234 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:13.505887 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:13.509132 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:13.719365 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:13.892337 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:14.027805 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:14.029543 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:14.218097 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:14.394284 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:14.503684 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:14.504768 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:14.720283 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:14.891679 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:15.005388 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:15.108689 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:15.218457 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:15.375762 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:15.392211 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:15.504886 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:15.505624 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:15.717476 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:15.891681 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:16.009431 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:16.012968 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:16.218788 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:16.391091 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:16.505725 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:16.508000 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:16.719209 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:16.893291 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:17.011839 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:17.012867 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:17.219510 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:17.376009 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:17.392084 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:17.506117 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:17.509472 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:17.718736 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:17.892359 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:18.011278 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:18.011976 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:18.218284 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:18.391739 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:18.504420 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:18.505593 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:18.718246 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:18.891814 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:19.009582 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:19.010144 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:19.217852 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:19.391270 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:19.505094 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:19.505450 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:19.717938 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:19.876031 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:19.892583 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:20.022672 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:20.023496 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:20.219111 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:20.391707 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:20.504488 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:20.505535 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:20.735971 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:20.894400 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:21.005148 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:21.006658 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:21.218083 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:21.392231 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:21.505987 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:21.507535 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:21.719497 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:21.876166 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:21.895827 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:22.005926 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:22.015854 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:22.218563 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:22.392508 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:22.505920 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:22.507345 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:22.721627 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:22.891650 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:23.007542 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:23.011624 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:23.218496 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:23.424380 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:23.517867 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:23.519670 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:23.717708 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:23.877493 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:23.892213 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:24.009293 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:24.010054 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:24.218495 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:24.391439 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:24.505968 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:40:24.507321 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:24.718282 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:24.892049 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:25.021077 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:25.027241 2523870 kapi.go:107] duration metric: took 1m9.528110217s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:40:25.217764 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:25.390797 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:25.503618 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:25.717901 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:25.893381 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:26.009074 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:26.217567 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:26.374885 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:26.391801 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:26.503999 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:26.722475 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:26.890983 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:27.006887 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:27.219513 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:27.392340 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:27.504077 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:27.718269 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:27.892904 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:28.004023 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:28.219042 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:28.376299 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:28.399220 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:28.504498 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:28.718964 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:28.896135 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:29.006026 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:29.218032 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:29.393178 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:29.509539 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:29.718139 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:29.893776 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:30.005062 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:30.234708 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:30.393094 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:30.505057 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:30.718540 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:30.876680 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:30.893933 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:31.008054 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:31.219075 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:31.404942 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:31.505691 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:31.718932 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:31.893105 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:32.009801 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:32.219037 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:32.393111 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:32.504180 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:32.719026 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:32.876996 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:32.892930 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:33.005692 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:33.217717 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:33.391361 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:33.504310 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:33.718712 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:33.891841 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:34.005309 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:34.219141 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:34.423022 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:34.503613 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:34.726243 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:34.896767 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:35.004767 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:35.218452 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:35.378703 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:35.398054 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:35.504269 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:35.719379 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:35.896417 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:36.020512 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:36.218661 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:36.393103 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:36.505162 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:36.718101 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:36.895403 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:37.007273 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:37.218042 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:37.392145 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:37.503483 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:37.718902 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:37.875591 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:37.891548 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:38.005969 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:38.217510 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:38.391997 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:38.503726 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:38.718614 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:38.891369 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:39.005328 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:39.217328 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:39.391927 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:39.504617 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:39.718749 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:39.876161 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:39.891185 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:40.004226 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:40.218071 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:40.392301 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:40.505556 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:40.717967 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:40.892236 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:41.005881 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:41.218764 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:41.395672 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:41.503746 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:41.719115 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:41.876921 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:41.895525 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:42.011166 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:42.218028 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:42.392438 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:42.503989 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:42.718426 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:42.891965 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:43.005470 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:43.218325 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:43.391674 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:43.503672 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:43.718546 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:43.891279 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:44.009592 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:44.218862 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:44.377134 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:44.391140 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:44.504636 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:44.718865 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:44.892732 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:45.005120 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:45.220362 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:45.393290 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:45.504799 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:45.719264 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:45.892303 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:46.010041 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:46.222170 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:40:46.392718 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:46.507034 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:46.719634 2523870 kapi.go:107] duration metric: took 1m27.005612282s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:40:46.721255 2523870 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-078133 cluster.
	I0915 06:40:46.722663 2523870 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:40:46.723801 2523870 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
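To opt a pod out of this credential mounting, the `gcp-auth-skip-secret` label must be present when the pod is created; a minimal sketch (the pod name `demo` is illustrative, and the label value is arbitrary since only the key is checked):

	kubectl --context addons-078133 run demo --image=gcr.io/k8s-minikube/busybox --labels=gcp-auth-skip-secret=true -- sleep 3600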
	I0915 06:40:46.876708 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:46.894513 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:47.005594 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:47.392485 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:47.504081 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:47.897917 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:48.005531 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:48.391420 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:48.503783 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:48.878884 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:48.893603 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:49.007483 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:49.391911 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:49.505584 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:49.891537 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:50.012368 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:50.392057 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:50.503606 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:50.891754 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:51.004331 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:51.379225 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:51.391873 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:51.504975 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:51.892942 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:52.069383 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:52.397630 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:52.504476 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:52.891313 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:53.011566 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:53.392684 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:53.504669 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:53.875903 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:53.891954 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:54.006138 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:54.392101 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:54.503774 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:54.899918 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:55.006756 2523870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:40:55.392260 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:55.504130 2523870 kapi.go:107] duration metric: took 1m40.004978236s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:40:55.892947 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:56.382504 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:56.392491 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:56.924548 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:57.393779 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:57.891466 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:58.392642 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:58.877042 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:40:58.891963 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:59.391610 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:40:59.893537 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:00.397105 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:00.904885 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:01.375303 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:01.391382 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:01.892308 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:02.392116 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:02.894530 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:03.375597 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:03.392955 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:03.891747 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:04.399605 2523870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:41:04.891765 2523870 kapi.go:107] duration metric: took 1m49.0055889s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:41:04.894260 2523870 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0915 06:41:04.895478 2523870 addons.go:510] duration metric: took 1m55.855150005s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
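The enabled-addons summary above can be cross-checked against minikube itself; a sketch, assuming the addons-078133 profile is still running:

	out/minikube-linux-arm64 -p addons-078133 addons list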
	I0915 06:41:05.875469 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:08.377139 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:10.875168 2523870 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"False"
	I0915 06:41:11.380090 2523870 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace has status "Ready":"True"
	I0915 06:41:11.380127 2523870 pod_ready.go:82] duration metric: took 1m18.011601636s for pod "metrics-server-84c5f94fbc-gfw99" in "kube-system" namespace to be "Ready" ...
	I0915 06:41:11.380141 2523870 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-cwx62" in "kube-system" namespace to be "Ready" ...
	I0915 06:41:11.415635 2523870 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-cwx62" in "kube-system" namespace has status "Ready":"True"
	I0915 06:41:11.415662 2523870 pod_ready.go:82] duration metric: took 35.513361ms for pod "nvidia-device-plugin-daemonset-cwx62" in "kube-system" namespace to be "Ready" ...
	I0915 06:41:11.415685 2523870 pod_ready.go:39] duration metric: took 1m20.01772025s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
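The Ready waits above can be reproduced with kubectl directly; a sketch using the metrics-server pod name from the log:

	kubectl --context addons-078133 -n kube-system wait --for=condition=Ready pod/metrics-server-84c5f94fbc-gfw99 --timeout=6m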
	I0915 06:41:11.415708 2523870 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:41:11.415741 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:41:11.415815 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:41:11.495394 2523870 cri.go:89] found id: "e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:11.495424 2523870 cri.go:89] found id: ""
	I0915 06:41:11.495434 2523870 logs.go:276] 1 containers: [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6]
	I0915 06:41:11.495517 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.499500 2523870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:41:11.499585 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:41:11.550559 2523870 cri.go:89] found id: "aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:11.550594 2523870 cri.go:89] found id: ""
	I0915 06:41:11.550603 2523870 logs.go:276] 1 containers: [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6]
	I0915 06:41:11.550667 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.554309 2523870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:41:11.554399 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:41:11.601798 2523870 cri.go:89] found id: "85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:11.601821 2523870 cri.go:89] found id: ""
	I0915 06:41:11.601829 2523870 logs.go:276] 1 containers: [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c]
	I0915 06:41:11.601888 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.605508 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:41:11.605625 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:41:11.647917 2523870 cri.go:89] found id: "9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:11.647991 2523870 cri.go:89] found id: ""
	I0915 06:41:11.648013 2523870 logs.go:276] 1 containers: [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159]
	I0915 06:41:11.648110 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.651911 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:41:11.652032 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:41:11.698154 2523870 cri.go:89] found id: "7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:11.698186 2523870 cri.go:89] found id: ""
	I0915 06:41:11.698195 2523870 logs.go:276] 1 containers: [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee]
	I0915 06:41:11.698256 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.701917 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:41:11.701995 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:41:11.746530 2523870 cri.go:89] found id: "fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:11.746597 2523870 cri.go:89] found id: ""
	I0915 06:41:11.746615 2523870 logs.go:276] 1 containers: [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1]
	I0915 06:41:11.746685 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:11.750359 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:41:11.750457 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:41:11.793770 2523870 cri.go:89] found id: "0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:11.793794 2523870 cri.go:89] found id: ""
	I0915 06:41:11.793802 2523870 logs.go:276] 1 containers: [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725]
	I0915 06:41:11.793884 2523870 ssh_runner.go:195] Run: which crictl
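The container-discovery step above can be replayed by hand from the host by shelling into the node and running the same crictl query shown in the log; a sketch for the kube-apiserver lookup:

	out/minikube-linux-arm64 -p addons-078133 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver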
	I0915 06:41:11.797463 2523870 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:41:11.797492 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:41:11.992092 2523870 logs.go:123] Gathering logs for etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] ...
	I0915 06:41:11.992123 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:12.054295 2523870 logs.go:123] Gathering logs for kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] ...
	I0915 06:41:12.054337 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:12.107869 2523870 logs.go:123] Gathering logs for kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] ...
	I0915 06:41:12.107906 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:12.152727 2523870 logs.go:123] Gathering logs for container status ...
	I0915 06:41:12.152760 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:41:12.209277 2523870 logs.go:123] Gathering logs for kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] ...
	I0915 06:41:12.209313 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:12.282525 2523870 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:41:12.282570 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:41:12.379304 2523870 logs.go:123] Gathering logs for kubelet ...
	I0915 06:41:12.379387 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0915 06:41:12.452980 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028288    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.453256 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.453428 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.453641 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.453826 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.454053 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:12.488341 2523870 logs.go:123] Gathering logs for dmesg ...
	I0915 06:41:12.488390 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:41:12.506041 2523870 logs.go:123] Gathering logs for kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] ...
	I0915 06:41:12.506071 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:12.563059 2523870 logs.go:123] Gathering logs for coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] ...
	I0915 06:41:12.563096 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:12.606199 2523870 logs.go:123] Gathering logs for kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] ...
	I0915 06:41:12.606234 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:12.648655 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:12.648683 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0915 06:41:12.648741 2523870 out.go:270] X Problems detected in kubelet:
	W0915 06:41:12.648758 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.648765 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.648780 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:12.648787 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:12.648799 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:12.648833 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:12.648843 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
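Each gathered log can likewise be fetched directly, reusing a container id reported above; a sketch for the kube-apiserver container:

	out/minikube-linux-arm64 -p addons-078133 ssh -- sudo /usr/bin/crictl logs --tail 400 e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6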
	I0915 06:41:22.649917 2523870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:41:22.664122 2523870 api_server.go:72] duration metric: took 2m13.624140746s to wait for apiserver process to appear ...
	I0915 06:41:22.664149 2523870 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:41:22.664188 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:41:22.664251 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:41:22.715271 2523870 cri.go:89] found id: "e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:22.715298 2523870 cri.go:89] found id: ""
	I0915 06:41:22.715308 2523870 logs.go:276] 1 containers: [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6]
	I0915 06:41:22.715367 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.718981 2523870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:41:22.719054 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:41:22.758523 2523870 cri.go:89] found id: "aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:22.758548 2523870 cri.go:89] found id: ""
	I0915 06:41:22.758558 2523870 logs.go:276] 1 containers: [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6]
	I0915 06:41:22.758622 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.762372 2523870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:41:22.762450 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:41:22.803919 2523870 cri.go:89] found id: "85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:22.803939 2523870 cri.go:89] found id: ""
	I0915 06:41:22.803946 2523870 logs.go:276] 1 containers: [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c]
	I0915 06:41:22.804003 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.807829 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:41:22.807902 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:41:22.846386 2523870 cri.go:89] found id: "9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:22.846461 2523870 cri.go:89] found id: ""
	I0915 06:41:22.846477 2523870 logs.go:276] 1 containers: [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159]
	I0915 06:41:22.846550 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.850418 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:41:22.850502 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:41:22.894080 2523870 cri.go:89] found id: "7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:22.894105 2523870 cri.go:89] found id: ""
	I0915 06:41:22.894113 2523870 logs.go:276] 1 containers: [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee]
	I0915 06:41:22.894173 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.898275 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:41:22.898353 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:41:22.938696 2523870 cri.go:89] found id: "fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:22.938717 2523870 cri.go:89] found id: ""
	I0915 06:41:22.938725 2523870 logs.go:276] 1 containers: [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1]
	I0915 06:41:22.938785 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.942715 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:41:22.942798 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:41:22.990421 2523870 cri.go:89] found id: "0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:22.990492 2523870 cri.go:89] found id: ""
	I0915 06:41:22.990514 2523870 logs.go:276] 1 containers: [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725]
	I0915 06:41:22.990602 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:22.994406 2523870 logs.go:123] Gathering logs for kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] ...
	I0915 06:41:22.994433 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:23.073513 2523870 logs.go:123] Gathering logs for etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] ...
	I0915 06:41:23.073551 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:23.141989 2523870 logs.go:123] Gathering logs for kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] ...
	I0915 06:41:23.142067 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:23.197032 2523870 logs.go:123] Gathering logs for kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] ...
	I0915 06:41:23.197109 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:23.242720 2523870 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:41:23.242756 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:41:23.337137 2523870 logs.go:123] Gathering logs for container status ...
	I0915 06:41:23.337178 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:41:23.394824 2523870 logs.go:123] Gathering logs for kubelet ...
	I0915 06:41:23.394853 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0915 06:41:23.446249 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028288    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.446518 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.446688 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.446894 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.447080 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.447305 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:23.482115 2523870 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:41:23.482149 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:41:23.634605 2523870 logs.go:123] Gathering logs for coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] ...
	I0915 06:41:23.634636 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:23.675844 2523870 logs.go:123] Gathering logs for kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] ...
	I0915 06:41:23.675873 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:23.723363 2523870 logs.go:123] Gathering logs for kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] ...
	I0915 06:41:23.723398 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:23.797568 2523870 logs.go:123] Gathering logs for dmesg ...
	I0915 06:41:23.797657 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:41:23.816018 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:23.816047 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0915 06:41:23.816107 2523870 out.go:270] X Problems detected in kubelet:
	W0915 06:41:23.816120 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.816132 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.816144 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:23.816154 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:23.816160 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:23.816172 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:23.816178 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:41:33.817587 2523870 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 06:41:33.825225 2523870 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 06:41:33.826245 2523870 api_server.go:141] control plane version: v1.31.1
	I0915 06:41:33.826278 2523870 api_server.go:131] duration metric: took 11.162120505s to wait for apiserver health ...
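The healthz probe above is a plain HTTPS GET; a sketch from the host (`-k` skips verification because the cluster CA is not in the host trust store, and the endpoint is anonymously readable under default RBAC):

	curl -k https://192.168.49.2:8443/healthz

This prints `ok` when the apiserver is healthy, matching the 200 response in the log.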
	I0915 06:41:33.826288 2523870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:41:33.826312 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:41:33.826381 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:41:33.865811 2523870 cri.go:89] found id: "e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:33.865838 2523870 cri.go:89] found id: ""
	I0915 06:41:33.865847 2523870 logs.go:276] 1 containers: [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6]
	I0915 06:41:33.865905 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.869614 2523870 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:41:33.869702 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:41:33.907874 2523870 cri.go:89] found id: "aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:33.907899 2523870 cri.go:89] found id: ""
	I0915 06:41:33.907907 2523870 logs.go:276] 1 containers: [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6]
	I0915 06:41:33.907963 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.911687 2523870 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:41:33.911762 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:41:33.951105 2523870 cri.go:89] found id: "85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:33.951128 2523870 cri.go:89] found id: ""
	I0915 06:41:33.951137 2523870 logs.go:276] 1 containers: [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c]
	I0915 06:41:33.951196 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.954918 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:41:33.955022 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:41:33.994550 2523870 cri.go:89] found id: "9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:33.994574 2523870 cri.go:89] found id: ""
	I0915 06:41:33.994583 2523870 logs.go:276] 1 containers: [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159]
	I0915 06:41:33.994643 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:33.998722 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:41:33.998797 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:41:34.039134 2523870 cri.go:89] found id: "7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:34.039159 2523870 cri.go:89] found id: ""
	I0915 06:41:34.039167 2523870 logs.go:276] 1 containers: [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee]
	I0915 06:41:34.039230 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:34.043267 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:41:34.043394 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:41:34.084090 2523870 cri.go:89] found id: "fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:34.084114 2523870 cri.go:89] found id: ""
	I0915 06:41:34.084123 2523870 logs.go:276] 1 containers: [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1]
	I0915 06:41:34.084176 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:34.087813 2523870 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:41:34.087891 2523870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:41:34.132606 2523870 cri.go:89] found id: "0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:34.132631 2523870 cri.go:89] found id: ""
	I0915 06:41:34.132639 2523870 logs.go:276] 1 containers: [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725]
	I0915 06:41:34.132712 2523870 ssh_runner.go:195] Run: which crictl
	I0915 06:41:34.136498 2523870 logs.go:123] Gathering logs for kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] ...
	I0915 06:41:34.136526 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159"
	I0915 06:41:34.183368 2523870 logs.go:123] Gathering logs for kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] ...
	I0915 06:41:34.183400 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee"
	I0915 06:41:34.226908 2523870 logs.go:123] Gathering logs for kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] ...
	I0915 06:41:34.226942 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1"
	I0915 06:41:34.320748 2523870 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:41:34.320790 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:41:34.423086 2523870 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:41:34.423130 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:41:34.576900 2523870 logs.go:123] Gathering logs for kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] ...
	I0915 06:41:34.576934 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6"
	I0915 06:41:34.653698 2523870 logs.go:123] Gathering logs for etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] ...
	I0915 06:41:34.653736 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6"
	I0915 06:41:34.704486 2523870 logs.go:123] Gathering logs for coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] ...
	I0915 06:41:34.704520 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c"
	I0915 06:41:34.751429 2523870 logs.go:123] Gathering logs for kubelet ...
	I0915 06:41:34.751460 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0915 06:41:34.804369 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028288    1502 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.804610 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.804777 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.804990 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.805174 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.805399 2523870 logs.go:138] Found kubelet problem: Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:34.842270 2523870 logs.go:123] Gathering logs for dmesg ...
	I0915 06:41:34.842324 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:41:34.861474 2523870 logs.go:123] Gathering logs for kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] ...
	I0915 06:41:34.861505 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725"
	I0915 06:41:34.906963 2523870 logs.go:123] Gathering logs for container status ...
	I0915 06:41:34.906995 2523870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:41:34.978748 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:34.978778 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0915 06:41:34.978858 2523870 out.go:270] X Problems detected in kubelet:
	W0915 06:41:34.978873 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028354    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.978881 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028415    1502 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-078133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.978887 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028427    1502 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	W0915 06:41:34.978894 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: W0915 06:39:51.028482    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-078133" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-078133' and this object
	W0915 06:41:34.979024 2523870 out.go:270]   Sep 15 06:39:51 addons-078133 kubelet[1502]: E0915 06:39:51.028495    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-078133\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-078133' and this object" logger="UnhandledError"
	I0915 06:41:34.979041 2523870 out.go:358] Setting ErrFile to fd 2...
	I0915 06:41:34.979048 2523870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:41:44.992518 2523870 system_pods.go:59] 18 kube-system pods found
	I0915 06:41:44.992563 2523870 system_pods.go:61] "coredns-7c65d6cfc9-7vkbz" [6ea47236-17f3-4492-8780-9ad56187f489] Running
	I0915 06:41:44.992570 2523870 system_pods.go:61] "csi-hostpath-attacher-0" [fbcdc315-eaad-4112-a529-eec22f5f7dce] Running
	I0915 06:41:44.992575 2523870 system_pods.go:61] "csi-hostpath-resizer-0" [f5efb463-f551-4dde-87d2-5ec91a566e81] Running
	I0915 06:41:44.992579 2523870 system_pods.go:61] "csi-hostpathplugin-cgcjb" [58bfa35e-116a-45b1-a414-47dadde393c6] Running
	I0915 06:41:44.992583 2523870 system_pods.go:61] "etcd-addons-078133" [b238897b-6598-4d41-915c-57e032f1b6ad] Running
	I0915 06:41:44.992589 2523870 system_pods.go:61] "kindnet-h6zsk" [9c090aa0-3e32-475a-9090-5423f0449354] Running
	I0915 06:41:44.992593 2523870 system_pods.go:61] "kube-apiserver-addons-078133" [9606256f-7a4c-47eb-91e3-29271e631613] Running
	I0915 06:41:44.992597 2523870 system_pods.go:61] "kube-controller-manager-addons-078133" [fa465a0e-97b0-4d5f-af33-a26dbf7e3985] Running
	I0915 06:41:44.992602 2523870 system_pods.go:61] "kube-ingress-dns-minikube" [d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6] Running
	I0915 06:41:44.992637 2523870 system_pods.go:61] "kube-proxy-fjj4k" [be724ff8-b220-4bfb-961c-c6cf462d9ddc] Running
	I0915 06:41:44.992646 2523870 system_pods.go:61] "kube-scheduler-addons-078133" [8a13493f-2796-4a2e-b83b-2f5f8f4f09bb] Running
	I0915 06:41:44.992651 2523870 system_pods.go:61] "metrics-server-84c5f94fbc-gfw99" [8d80d558-0f92-43df-9e1e-035dad596039] Running
	I0915 06:41:44.992655 2523870 system_pods.go:61] "nvidia-device-plugin-daemonset-cwx62" [6bc66e81-1049-45ef-b236-d0ad12ba82cf] Running
	I0915 06:41:44.992658 2523870 system_pods.go:61] "registry-66c9cd494c-dvjjx" [f6332eec-8451-4a18-b1e4-899a9c98a398] Running
	I0915 06:41:44.992662 2523870 system_pods.go:61] "registry-proxy-pph5w" [5bfdb7e0-869e-409d-b185-7e7c0d0386d6] Running
	I0915 06:41:44.992666 2523870 system_pods.go:61] "snapshot-controller-56fcc65765-6lsdb" [40abaaf0-851b-4368-bb6c-c43e5fd96b18] Running
	I0915 06:41:44.992669 2523870 system_pods.go:61] "snapshot-controller-56fcc65765-9dh55" [aac62e95-b572-45ce-ba9b-5b4451c8578b] Running
	I0915 06:41:44.992673 2523870 system_pods.go:61] "storage-provisioner" [30881b3f-dd6b-47c6-8171-db912be01758] Running
	I0915 06:41:44.992680 2523870 system_pods.go:74] duration metric: took 11.166385954s to wait for pod list to return data ...
	I0915 06:41:44.992692 2523870 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:41:44.995239 2523870 default_sa.go:45] found service account: "default"
	I0915 06:41:44.995269 2523870 default_sa.go:55] duration metric: took 2.570121ms for default service account to be created ...
	I0915 06:41:44.995278 2523870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:41:45.005688 2523870 system_pods.go:86] 18 kube-system pods found
	I0915 06:41:45.005731 2523870 system_pods.go:89] "coredns-7c65d6cfc9-7vkbz" [6ea47236-17f3-4492-8780-9ad56187f489] Running
	I0915 06:41:45.005739 2523870 system_pods.go:89] "csi-hostpath-attacher-0" [fbcdc315-eaad-4112-a529-eec22f5f7dce] Running
	I0915 06:41:45.005745 2523870 system_pods.go:89] "csi-hostpath-resizer-0" [f5efb463-f551-4dde-87d2-5ec91a566e81] Running
	I0915 06:41:45.005749 2523870 system_pods.go:89] "csi-hostpathplugin-cgcjb" [58bfa35e-116a-45b1-a414-47dadde393c6] Running
	I0915 06:41:45.005753 2523870 system_pods.go:89] "etcd-addons-078133" [b238897b-6598-4d41-915c-57e032f1b6ad] Running
	I0915 06:41:45.005758 2523870 system_pods.go:89] "kindnet-h6zsk" [9c090aa0-3e32-475a-9090-5423f0449354] Running
	I0915 06:41:45.005762 2523870 system_pods.go:89] "kube-apiserver-addons-078133" [9606256f-7a4c-47eb-91e3-29271e631613] Running
	I0915 06:41:45.005766 2523870 system_pods.go:89] "kube-controller-manager-addons-078133" [fa465a0e-97b0-4d5f-af33-a26dbf7e3985] Running
	I0915 06:41:45.005771 2523870 system_pods.go:89] "kube-ingress-dns-minikube" [d0b76b7a-1b79-4a7d-9ee3-3ceb46aa75f6] Running
	I0915 06:41:45.005776 2523870 system_pods.go:89] "kube-proxy-fjj4k" [be724ff8-b220-4bfb-961c-c6cf462d9ddc] Running
	I0915 06:41:45.005780 2523870 system_pods.go:89] "kube-scheduler-addons-078133" [8a13493f-2796-4a2e-b83b-2f5f8f4f09bb] Running
	I0915 06:41:45.005785 2523870 system_pods.go:89] "metrics-server-84c5f94fbc-gfw99" [8d80d558-0f92-43df-9e1e-035dad596039] Running
	I0915 06:41:45.005792 2523870 system_pods.go:89] "nvidia-device-plugin-daemonset-cwx62" [6bc66e81-1049-45ef-b236-d0ad12ba82cf] Running
	I0915 06:41:45.005797 2523870 system_pods.go:89] "registry-66c9cd494c-dvjjx" [f6332eec-8451-4a18-b1e4-899a9c98a398] Running
	I0915 06:41:45.005801 2523870 system_pods.go:89] "registry-proxy-pph5w" [5bfdb7e0-869e-409d-b185-7e7c0d0386d6] Running
	I0915 06:41:45.005805 2523870 system_pods.go:89] "snapshot-controller-56fcc65765-6lsdb" [40abaaf0-851b-4368-bb6c-c43e5fd96b18] Running
	I0915 06:41:45.005811 2523870 system_pods.go:89] "snapshot-controller-56fcc65765-9dh55" [aac62e95-b572-45ce-ba9b-5b4451c8578b] Running
	I0915 06:41:45.005815 2523870 system_pods.go:89] "storage-provisioner" [30881b3f-dd6b-47c6-8171-db912be01758] Running
	I0915 06:41:45.005824 2523870 system_pods.go:126] duration metric: took 10.539108ms to wait for k8s-apps to be running ...
	I0915 06:41:45.005833 2523870 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:41:45.005903 2523870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:41:45.040231 2523870 system_svc.go:56] duration metric: took 34.383305ms WaitForService to wait for kubelet
	I0915 06:41:45.041762 2523870 kubeadm.go:582] duration metric: took 2m36.001781462s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:41:45.041984 2523870 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:41:45.049036 2523870 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 06:41:45.055344 2523870 node_conditions.go:123] node cpu capacity is 2
	I0915 06:41:45.061556 2523870 node_conditions.go:105] duration metric: took 17.573916ms to run NodePressure ...
	I0915 06:41:45.061585 2523870 start.go:241] waiting for startup goroutines ...
	I0915 06:41:45.061593 2523870 start.go:246] waiting for cluster config update ...
	I0915 06:41:45.061614 2523870 start.go:255] writing updated cluster config ...
	I0915 06:41:45.061999 2523870 ssh_runner.go:195] Run: rm -f paused
	I0915 06:41:45.465387 2523870 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:41:45.468637 2523870 out.go:177] * Done! kubectl is now configured to use "addons-078133" cluster and "default" namespace by default
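	
	Editor's note: the system_pods.go / default_sa.go waits above are apiserver polls — minikube lists the kube-system pods until every one reports phase Running, then checks the default service account and the kubelet systemd unit. A minimal, hypothetical client-go sketch of that style of poll (illustrative only, not minikube's actual implementation; the 2s interval and 6m timeout are assumptions):
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "time"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	        // Use the kubeconfig the "Done!" line above says kubectl is configured with.
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        client := kubernetes.NewForConfigOrDie(cfg)
	
	        // Illustrative poll mirroring the "waiting for k8s-apps to be running"
	        // step in the log; minikube's real wait logic lives in its own packages.
	        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	                if err != nil {
	                    return false, nil // transient apiserver errors: keep polling
	                }
	                for _, p := range pods.Items {
	                    if p.Status.Phase != corev1.PodRunning {
	                        return false, nil
	                    }
	                }
	                return true, nil
	            })
	        fmt.Println("k8s-apps running:", err == nil)
	    }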
	
	
	==> CRI-O <==
	Sep 15 06:55:20 addons-078133 crio[962]: time="2024-09-15 06:55:20.242868150Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f7846aa1-78ab-438a-89ce-fac3f83832ea name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:55:31 addons-078133 crio[962]: time="2024-09-15 06:55:31.243027810Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b2db79ee-229b-4503-928e-62b5f77d3886 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:55:31 addons-078133 crio[962]: time="2024-09-15 06:55:31.243269133Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b2db79ee-229b-4503-928e-62b5f77d3886 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:55:45 addons-078133 crio[962]: time="2024-09-15 06:55:45.242818613Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cbaa3d13-f9a3-46f9-8c3c-c1464a596b59 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:55:45 addons-078133 crio[962]: time="2024-09-15 06:55:45.243092149Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cbaa3d13-f9a3-46f9-8c3c-c1464a596b59 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:56:00 addons-078133 crio[962]: time="2024-09-15 06:56:00.247368083Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7df35955-05c1-4585-8a40-927420877670 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:56:00 addons-078133 crio[962]: time="2024-09-15 06:56:00.247669080Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7df35955-05c1-4585-8a40-927420877670 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:56:12 addons-078133 crio[962]: time="2024-09-15 06:56:12.243548247Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b2bc61e0-1174-4234-aac7-ba9e8c007e7d name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:56:12 addons-078133 crio[962]: time="2024-09-15 06:56:12.243802731Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b2bc61e0-1174-4234-aac7-ba9e8c007e7d name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:56:23 addons-078133 crio[962]: time="2024-09-15 06:56:23.242829125Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5e34fe3b-c238-4a74-9303-946bdb039431 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:56:23 addons-078133 crio[962]: time="2024-09-15 06:56:23.243082345Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5e34fe3b-c238-4a74-9303-946bdb039431 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:56:35 addons-078133 crio[962]: time="2024-09-15 06:56:35.243059156Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=74e635d3-cafd-47a6-a992-84312635a56b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:56:35 addons-078133 crio[962]: time="2024-09-15 06:56:35.243307626Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=74e635d3-cafd-47a6-a992-84312635a56b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:56:48 addons-078133 crio[962]: time="2024-09-15 06:56:48.243109775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b0ab343d-823c-4f0d-920a-33f087f3a872 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:56:48 addons-078133 crio[962]: time="2024-09-15 06:56:48.243356718Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b0ab343d-823c-4f0d-920a-33f087f3a872 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:57:02 addons-078133 crio[962]: time="2024-09-15 06:57:02.242290288Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7549e796-74de-4cd7-a3d0-3efdc60ea28b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:57:02 addons-078133 crio[962]: time="2024-09-15 06:57:02.242533975Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7549e796-74de-4cd7-a3d0-3efdc60ea28b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:57:13 addons-078133 crio[962]: time="2024-09-15 06:57:13.242939907Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9416cf03-94da-47ae-8ba6-0fdea5e30ade name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:57:13 addons-078133 crio[962]: time="2024-09-15 06:57:13.243201644Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9416cf03-94da-47ae-8ba6-0fdea5e30ade name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:57:22 addons-078133 crio[962]: time="2024-09-15 06:57:22.065094136Z" level=info msg="Stopping container: c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33 (timeout: 30s)" id=bbcb4f00-9825-4d18-a62d-8bd8e5bb5d74 name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:57:23 addons-078133 crio[962]: time="2024-09-15 06:57:23.268298471Z" level=info msg="Stopped container c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33: kube-system/metrics-server-84c5f94fbc-gfw99/metrics-server" id=bbcb4f00-9825-4d18-a62d-8bd8e5bb5d74 name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:57:23 addons-078133 crio[962]: time="2024-09-15 06:57:23.269321215Z" level=info msg="Stopping pod sandbox: 6b2883d632ffa3bcf47f3139e35f7453e25380b995b4c550a9f5d813366c55fd" id=a750e6e1-3a31-40f3-9025-12d53a0251d1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:57:23 addons-078133 crio[962]: time="2024-09-15 06:57:23.269550920Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-gfw99 Namespace:kube-system ID:6b2883d632ffa3bcf47f3139e35f7453e25380b995b4c550a9f5d813366c55fd UID:8d80d558-0f92-43df-9e1e-035dad596039 NetNS:/var/run/netns/79556a3f-d96a-4504-aec5-aa6cfe5830b3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:57:23 addons-078133 crio[962]: time="2024-09-15 06:57:23.269686671Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-gfw99 from CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:57:23 addons-078133 crio[962]: time="2024-09-15 06:57:23.337212311Z" level=info msg="Stopped pod sandbox: 6b2883d632ffa3bcf47f3139e35f7453e25380b995b4c550a9f5d813366c55fd" id=a750e6e1-3a31-40f3-9025-12d53a0251d1 name=/runtime.v1.RuntimeService/StopPodSandbox
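	
	Editor's note: the long run of "Checking image status" / "Image ... not found" pairs above is the kubelet re-probing CRI-O for gcr.io/k8s-minikube/busybox:1.28.4-glibc on each ImagePullBackOff retry; the kubelet section below shows the matching "Back-off pulling image" errors for the default/busybox pod. A hypothetical sketch of the same ImageStatus RPC against the CRI-O socket, assuming the k8s.io/cri-api Go bindings and root access on the node:
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "time"
	
	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )
	
	    func main() {
	        // Assumption: run as root on the node; the socket path matches the
	        // node's cri-socket annotation shown in the "describe nodes" section.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()
	
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()
	
	        // The same ImageStatus RPC that produces each "Checking image status" line.
	        resp, err := runtimeapi.NewImageServiceClient(conn).ImageStatus(ctx,
	            &runtimeapi.ImageStatusRequest{
	                Image: &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"},
	            })
	        if err != nil {
	            panic(err)
	        }
	        if resp.Image == nil {
	            fmt.Println("image not found") // what CRI-O reports in the log above
	        } else {
	            fmt.Println("image present:", resp.Image.Id)
	        }
	    }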
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	970298acdf1fc       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   1a7955266a785       hello-world-app-55bf9c44b4-prp58
	406c2b057a5bb       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                         5 minutes ago       Running             nginx                     0                   5e8cffae4ca3c       nginx
	0827a067b0cde       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            16 minutes ago      Running             gcp-auth                  0                   0dde73874d0cd       gcp-auth-89d5ffd79-dfdjh
	c1c95dfa2a499       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   17 minutes ago      Exited              metrics-server            0                   6b2883d632ffa       metrics-server-84c5f94fbc-gfw99
	d271b7f778ca6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        17 minutes ago      Running             storage-provisioner       0                   e16867b58e664       storage-provisioner
	85daa7360e5e9       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        17 minutes ago      Running             coredns                   0                   9ab5526bc1400       coredns-7c65d6cfc9-7vkbz
	0dd8f2e1d527f       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        18 minutes ago      Running             kindnet-cni               0                   4ab45f1d528e9       kindnet-h6zsk
	7effe62b4c9a3       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        18 minutes ago      Running             kube-proxy                0                   519d37d41f025       kube-proxy-fjj4k
	e96ddc5409269       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        18 minutes ago      Running             kube-apiserver            0                   1b90d84bbc3b0       kube-apiserver-addons-078133
	9b04df1237c35       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        18 minutes ago      Running             kube-scheduler            0                   5bcd311de4186       kube-scheduler-addons-078133
	fc20989b36b93       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        18 minutes ago      Running             kube-controller-manager   0                   37863f70ae7a4       kube-controller-manager-addons-078133
	aa1f1d2a843d0       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        18 minutes ago      Running             etcd                      0                   037f467425e39       etcd-addons-078133
	
	
	==> coredns [85daa7360e5e9fa13403432b75462cbe802220b1691e4a2d9a8e8848e0c6882c] <==
	[INFO] 10.244.0.7:60956 - 40381 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116937s
	[INFO] 10.244.0.7:45161 - 29366 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002240627s
	[INFO] 10.244.0.7:45161 - 32945 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003202302s
	[INFO] 10.244.0.7:37659 - 38912 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000204787s
	[INFO] 10.244.0.7:37659 - 18694 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141732s
	[INFO] 10.244.0.7:46398 - 25256 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000213993s
	[INFO] 10.244.0.7:46398 - 24995 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00027443s
	[INFO] 10.244.0.7:47479 - 52991 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072909s
	[INFO] 10.244.0.7:47479 - 46333 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005142s
	[INFO] 10.244.0.7:49213 - 1338 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005339s
	[INFO] 10.244.0.7:49213 - 49467 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072876s
	[INFO] 10.244.0.7:42802 - 41891 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00141695s
	[INFO] 10.244.0.7:42802 - 39841 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001484666s
	[INFO] 10.244.0.7:38900 - 44116 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066592s
	[INFO] 10.244.0.7:38900 - 30299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116486s
	[INFO] 10.244.0.19:47931 - 25633 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002470447s
	[INFO] 10.244.0.19:33148 - 45348 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002478143s
	[INFO] 10.244.0.19:56417 - 22070 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147508s
	[INFO] 10.244.0.19:50454 - 60030 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133371s
	[INFO] 10.244.0.19:42936 - 16948 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128678s
	[INFO] 10.244.0.19:52660 - 34977 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125519s
	[INFO] 10.244.0.19:59020 - 55342 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003112933s
	[INFO] 10.244.0.19:49810 - 53119 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003366441s
	[INFO] 10.244.0.19:56751 - 42495 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005208407s
	[INFO] 10.244.0.19:42362 - 42298 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.005481853s
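	
	Editor's note: the NXDOMAIN ladder above is normal pod DNS behaviour, not an error. With the default ndots:5 pod resolv.conf, an unqualified lookup of registry.kube-system.svc.cluster.local is tried against the search suffixes (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, and the host's us-east-2.compute.internal domain) before the bare name resolves NOERROR. A trailing dot marks the name fully qualified and skips the ladder; a hypothetical snippet to observe this from a pod in the cluster:
	
	    package main
	
	    import (
	        "fmt"
	        "net"
	    )
	
	    func main() {
	        // Hypothetical: run inside a pod in this cluster, where Go's resolver
	        // honors the pod's resolv.conf. The trailing dot makes the name fully
	        // qualified, so the resolver sends one query instead of walking the
	        // search suffixes seen in the coredns log above.
	        addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local.")
	        fmt.Println(addrs, err)
	    }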
	
	
	==> describe nodes <==
	Name:               addons-078133
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-078133
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-078133
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_39_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-078133
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:39:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-078133
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:57:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:54:43 +0000   Sun, 15 Sep 2024 06:38:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:54:43 +0000   Sun, 15 Sep 2024 06:38:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:54:43 +0000   Sun, 15 Sep 2024 06:38:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:54:43 +0000   Sun, 15 Sep 2024 06:39:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-078133
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd8b84dea15e4d35b14dc406bd3d7d26
	  System UUID:                a2ace0dd-aa7e-4476-816d-37514df39de9
	  Boot ID:                    86c781ec-01d2-4efb-aba1-a43f302ac663
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-world-app-55bf9c44b4-prp58         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  gcp-auth                    gcp-auth-89d5ffd79-dfdjh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7c65d6cfc9-7vkbz                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     18m
	  kube-system                 etcd-addons-078133                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kindnet-h6zsk                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-addons-078133             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-078133    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-fjj4k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-078133             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node addons-078133 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node addons-078133 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node addons-078133 status is now: NodeHasSufficientPID
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  18m                kubelet          Node addons-078133 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m                kubelet          Node addons-078133 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m                kubelet          Node addons-078133 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           18m                node-controller  Node addons-078133 event: Registered Node addons-078133 in Controller
	  Normal   NodeReady                17m                kubelet          Node addons-078133 status is now: NodeReady
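	
	Editor's note: a quick cross-check of the Allocated resources table above. The 850m CPU request is the column sum 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 2000m allocatable CPU is 42.5%, which kubectl truncates to 42%. Likewise the 220Mi memory request is 70Mi + 100Mi + 50Mi, and the 100m CPU limit belongs to kindnet alone, the only pod that sets one.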
	
	
	==> dmesg <==
	[Sep15 05:34] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000091 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001089] FS-Cache: O-cookie d=000000009ec4a1b9{9P.session} n=00000000933e989b
	[  +0.001105] FS-Cache: O-key=[10] '34333036383438313233'
	[  +0.000796] FS-Cache: N-cookie c=00000092 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000965] FS-Cache: N-cookie d=000000009ec4a1b9{9P.session} n=00000000c50af53f
	[  +0.001363] FS-Cache: N-key=[10] '34333036383438313233'
	[Sep15 06:08] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [aa1f1d2a843d0c23480fce71db4c503b2e8964374e04dae157367e6852c9bbf6] <==
	{"level":"info","ts":"2024-09-15T06:38:58.060337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-15T06:38:58.060369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T06:38:58.065025Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-078133 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:38:58.065273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:38:58.065678Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.068367Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:38:58.068608Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:38:58.068687Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:38:58.069414Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:38:58.070446Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-15T06:38:58.073106Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.073273Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.088962Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:38:58.089741Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:38:58.090677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:39:10.078651Z","caller":"traceutil/trace.go:171","msg":"trace[978204264] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"138.849688ms","start":"2024-09-15T06:39:09.939783Z","end":"2024-09-15T06:39:10.078632Z","steps":["trace[978204264] 'process raft request'  (duration: 95.382705ms)","trace[978204264] 'compare'  (duration: 42.981654ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:39:13.438537Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.182536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:39:13.438634Z","caller":"traceutil/trace.go:171","msg":"trace[1902515032] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:440; }","duration":"112.30017ms","start":"2024-09-15T06:39:13.326320Z","end":"2024-09-15T06:39:13.438620Z","steps":["trace[1902515032] 'agreement among raft nodes before linearized reading'  (duration: 83.629989ms)","trace[1902515032] 'range keys from in-memory index tree'  (duration: 28.533716ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:39:51.757080Z","caller":"traceutil/trace.go:171","msg":"trace[1907155975] transaction","detail":"{read_only:false; response_revision:896; number_of_response:1; }","duration":"103.53271ms","start":"2024-09-15T06:39:51.653528Z","end":"2024-09-15T06:39:51.757061Z","steps":["trace[1907155975] 'process raft request'  (duration: 79.5189ms)","trace[1907155975] 'compare'  (duration: 23.406243ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:48:58.204333Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1530}
	{"level":"info","ts":"2024-09-15T06:48:58.238285Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1530,"took":"33.495045ms","hash":3104697584,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3293184,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-15T06:48:58.238443Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3104697584,"revision":1530,"compact-revision":-1}
	{"level":"info","ts":"2024-09-15T06:53:58.209201Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1948}
	{"level":"info","ts":"2024-09-15T06:53:58.225434Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1948,"took":"15.604764ms","hash":4226942108,"current-db-size-bytes":6336512,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4395008,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-15T06:53:58.225557Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4226942108,"revision":1948,"compact-revision":1530}
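	
	Editor's note on the single "apply request took too long" warning above: etcd flags any request whose apply exceeds the 100ms expected-duration it logs, and the accompanying trace breaks the 112ms down into roughly 84ms waiting for raft agreement plus 29ms reading the in-memory index tree. One such blip during addon startup on a 2-CPU node is unremarkable.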
	
	
	==> gcp-auth [0827a067b0cde94dfdfe774133d38b55169c16cd00de8fa5c926fac9c7c30441] <==
	2024/09/15 06:41:45 Ready to write response ...
	2024/09/15 06:41:46 Ready to marshal response ...
	2024/09/15 06:41:46 Ready to write response ...
	2024/09/15 06:49:53 Ready to marshal response ...
	2024/09/15 06:49:53 Ready to write response ...
	2024/09/15 06:50:00 Ready to marshal response ...
	2024/09/15 06:50:00 Ready to write response ...
	2024/09/15 06:50:20 Ready to marshal response ...
	2024/09/15 06:50:20 Ready to write response ...
	2024/09/15 06:50:54 Ready to marshal response ...
	2024/09/15 06:50:54 Ready to write response ...
	2024/09/15 06:50:55 Ready to marshal response ...
	2024/09/15 06:50:55 Ready to write response ...
	2024/09/15 06:51:03 Ready to marshal response ...
	2024/09/15 06:51:03 Ready to write response ...
	2024/09/15 06:51:11 Ready to marshal response ...
	2024/09/15 06:51:11 Ready to write response ...
	2024/09/15 06:51:11 Ready to marshal response ...
	2024/09/15 06:51:11 Ready to write response ...
	2024/09/15 06:51:11 Ready to marshal response ...
	2024/09/15 06:51:11 Ready to write response ...
	2024/09/15 06:52:00 Ready to marshal response ...
	2024/09/15 06:52:00 Ready to write response ...
	2024/09/15 06:54:21 Ready to marshal response ...
	2024/09/15 06:54:21 Ready to write response ...
	
	
	==> kernel <==
	 06:57:24 up 14:39,  0 users,  load average: 0.13, 0.32, 1.00
	Linux addons-078133 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0dd8f2e1d527f20f3c9edc9927ea2d371d42ade69836eccc743f726120922725] <==
	I0915 06:55:20.836989       1 main.go:299] handling current node
	I0915 06:55:30.839314       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:55:30.839475       1 main.go:299] handling current node
	I0915 06:55:40.836997       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:55:40.837129       1 main.go:299] handling current node
	I0915 06:55:50.837038       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:55:50.837072       1 main.go:299] handling current node
	I0915 06:56:00.842709       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:56:00.842824       1 main.go:299] handling current node
	I0915 06:56:10.837005       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:56:10.837042       1 main.go:299] handling current node
	I0915 06:56:20.837040       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:56:20.837073       1 main.go:299] handling current node
	I0915 06:56:30.838894       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:56:30.838928       1 main.go:299] handling current node
	I0915 06:56:40.837013       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:56:40.837051       1 main.go:299] handling current node
	I0915 06:56:50.838568       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:56:50.838601       1 main.go:299] handling current node
	I0915 06:57:00.838521       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:57:00.838557       1 main.go:299] handling current node
	I0915 06:57:10.836532       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:57:10.836678       1 main.go:299] handling current node
	I0915 06:57:20.839330       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:57:20.839368       1 main.go:299] handling current node
	
	
	==> kube-apiserver [e96ddc5409269b6fcd6d48967781269412a1b24ca020f68a08b841d477f748a6] <==
	E0915 06:50:29.246052       1 watch.go:250] "Unhandled Error" err="write tcp 192.168.49.2:8443->10.244.0.13:46336: write: connection reset by peer" logger="UnhandledError"
	I0915 06:50:35.680485       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.680547       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.774314       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.774371       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.811502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.811566       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.819471       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.820168       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:50:35.950749       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:50:35.950798       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:50:36.819999       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:50:36.951215       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0915 06:50:36.956283       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0915 06:51:05.884183       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:51:05.894579       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0915 06:51:05.905669       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0915 06:51:11.606105       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.7.251"}
	E0915 06:51:20.905697       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0915 06:51:54.780427       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 06:51:55.812908       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0915 06:52:00.747011       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0915 06:52:01.077622       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.12.1"}
	I0915 06:54:22.043767       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.230.72"}
	E0915 06:54:24.513992       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [fc20989b36b93fa8df92649de6995aae470778c2defc6000aa06bfaf1a8aebb1] <==
	W0915 06:54:52.389610       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:54:52.389664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:55:08.711041       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:55:08.711094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:55:26.161805       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:55:26.161849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:55:28.786380       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:55:28.786425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:55:48.087706       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:55:48.087758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:55:53.346918       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:55:53.346969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:56:02.864480       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:56:02.864528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:56:20.394424       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:56:20.394562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:56:35.878144       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:56:35.878191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:56:36.852742       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:56:36.852785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:56:47.868384       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:56:47.868427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:57:13.567464       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:57:13.567509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:57:22.035921       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="4.988µs"
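	
	Editor's note: the repeated PartialObjectMetadata list/watch failures above are most plausibly the metadata informer still retrying watches for API groups whose CRDs were removed when addons were disabled; the kube-apiserver log above shows the snapshot.storage.k8s.io and gadget.kinvolk.io watchers being terminated at 06:50:36 and 06:51:55, after which the controller-manager keeps receiving "the server could not find the requested resource".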
	
	
	==> kube-proxy [7effe62b4c9a37f021f11234b005d35070c18d30acdd93b874fb1b67918c7dee] <==
	I0915 06:39:13.431040       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:39:14.654548       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:39:14.654733       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:39:14.806709       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:39:14.806853       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:39:14.809136       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:39:14.809744       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:39:14.809813       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:39:14.834509       1 config.go:199] "Starting service config controller"
	I0915 06:39:14.847771       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:39:14.854180       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:39:14.881895       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:39:14.861657       1 config.go:328] "Starting node config controller"
	I0915 06:39:14.882892       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:39:14.982166       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:39:14.985602       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:39:14.987423       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9b04df1237c35352707d04f4c87efed8ba791cef59cac718b2a6053d4fe3e159] <==
	W0915 06:39:02.337994       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0915 06:39:02.338097       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:39:02.340793       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 06:39:02.338171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:39:02.340988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 06:39:02.341068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338326       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:39:02.341150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 06:39:02.341224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:39:02.341315       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.338549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:39:02.341387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:39:02.341464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:39:02.341546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:39:02.341632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:39:02.340415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 06:39:02.341721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0915 06:39:02.339535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0915 06:39:03.627072       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
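	
	Editor's note: the burst of "forbidden" list errors at 06:39:02 looks like the usual scheduler start-up race — its informers come up before the apiserver has finished reconciling the bootstrap RBAC roles — and the "Caches are synced" line one second later shows the watches recovered, so these lines are start-up noise rather than part of the test failure.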
	
	
	==> kubelet <==
	Sep 15 06:56:44 addons-078133 kubelet[1502]: E0915 06:56:44.644799    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383404644513636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:56:44 addons-078133 kubelet[1502]: E0915 06:56:44.644868    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383404644513636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:56:48 addons-078133 kubelet[1502]: E0915 06:56:48.243930    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="118abc58-e4e4-4fbe-a031-20b040e86f27"
	Sep 15 06:56:54 addons-078133 kubelet[1502]: E0915 06:56:54.647614    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383414647339788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:56:54 addons-078133 kubelet[1502]: E0915 06:56:54.647652    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383414647339788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:57:02 addons-078133 kubelet[1502]: E0915 06:57:02.242953    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="118abc58-e4e4-4fbe-a031-20b040e86f27"
	Sep 15 06:57:04 addons-078133 kubelet[1502]: E0915 06:57:04.650357    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383424650084362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:57:04 addons-078133 kubelet[1502]: E0915 06:57:04.650399    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383424650084362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:57:13 addons-078133 kubelet[1502]: E0915 06:57:13.243432    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="118abc58-e4e4-4fbe-a031-20b040e86f27"
	Sep 15 06:57:14 addons-078133 kubelet[1502]: E0915 06:57:14.653833    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383434653523604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:57:14 addons-078133 kubelet[1502]: E0915 06:57:14.653876    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383434653523604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:57:23 addons-078133 kubelet[1502]: I0915 06:57:23.449104    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d80d558-0f92-43df-9e1e-035dad596039-tmp-dir\") pod \"8d80d558-0f92-43df-9e1e-035dad596039\" (UID: \"8d80d558-0f92-43df-9e1e-035dad596039\") "
	Sep 15 06:57:23 addons-078133 kubelet[1502]: I0915 06:57:23.449166    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wrp5\" (UniqueName: \"kubernetes.io/projected/8d80d558-0f92-43df-9e1e-035dad596039-kube-api-access-2wrp5\") pod \"8d80d558-0f92-43df-9e1e-035dad596039\" (UID: \"8d80d558-0f92-43df-9e1e-035dad596039\") "
	Sep 15 06:57:23 addons-078133 kubelet[1502]: I0915 06:57:23.449765    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d80d558-0f92-43df-9e1e-035dad596039-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "8d80d558-0f92-43df-9e1e-035dad596039" (UID: "8d80d558-0f92-43df-9e1e-035dad596039"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 15 06:57:23 addons-078133 kubelet[1502]: I0915 06:57:23.453032    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d80d558-0f92-43df-9e1e-035dad596039-kube-api-access-2wrp5" (OuterVolumeSpecName: "kube-api-access-2wrp5") pod "8d80d558-0f92-43df-9e1e-035dad596039" (UID: "8d80d558-0f92-43df-9e1e-035dad596039"). InnerVolumeSpecName "kube-api-access-2wrp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:57:23 addons-078133 kubelet[1502]: I0915 06:57:23.550379    1502 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d80d558-0f92-43df-9e1e-035dad596039-tmp-dir\") on node \"addons-078133\" DevicePath \"\""
	Sep 15 06:57:23 addons-078133 kubelet[1502]: I0915 06:57:23.550420    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2wrp5\" (UniqueName: \"kubernetes.io/projected/8d80d558-0f92-43df-9e1e-035dad596039-kube-api-access-2wrp5\") on node \"addons-078133\" DevicePath \"\""
	Sep 15 06:57:24 addons-078133 kubelet[1502]: I0915 06:57:24.071679    1502 scope.go:117] "RemoveContainer" containerID="c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33"
	Sep 15 06:57:24 addons-078133 kubelet[1502]: I0915 06:57:24.103653    1502 scope.go:117] "RemoveContainer" containerID="c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33"
	Sep 15 06:57:24 addons-078133 kubelet[1502]: E0915 06:57:24.104039    1502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33\": container with ID starting with c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33 not found: ID does not exist" containerID="c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33"
	Sep 15 06:57:24 addons-078133 kubelet[1502]: I0915 06:57:24.104071    1502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33"} err="failed to get container status \"c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33\": rpc error: code = NotFound desc = could not find container \"c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33\": container with ID starting with c1c95dfa2a49932d3af3c69e52b35d6b93909c494e790033500d086ce03b0c33 not found: ID does not exist"
	Sep 15 06:57:24 addons-078133 kubelet[1502]: E0915 06:57:24.244046    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="118abc58-e4e4-4fbe-a031-20b040e86f27"
	Sep 15 06:57:24 addons-078133 kubelet[1502]: I0915 06:57:24.249954    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d80d558-0f92-43df-9e1e-035dad596039" path="/var/lib/kubelet/pods/8d80d558-0f92-43df-9e1e-035dad596039/volumes"
	Sep 15 06:57:24 addons-078133 kubelet[1502]: E0915 06:57:24.657825    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383444655925189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:57:24 addons-078133 kubelet[1502]: E0915 06:57:24.657870    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383444655925189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572279,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [d271b7f778ca6a5e43c6790e874afaf722384211e819eedb0f87091dcf8bb3ca] <==
	I0915 06:39:51.876457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:39:52.092367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:39:52.122251       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:39:52.141776       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:39:52.142096       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-078133_b714d925-ab44-41be-bcf1-c4695a08fcc2!
	I0915 06:39:52.143415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c1414a91-3bba-456a-9087-6984d4f1a1e5", APIVersion:"v1", ResourceVersion:"932", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-078133_b714d925-ab44-41be-bcf1-c4695a08fcc2 became leader
	I0915 06:39:52.243076       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-078133_b714d925-ab44-41be-bcf1-c4695a08fcc2!
	

-- /stdout --
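
The kubelet section above repeats one failure pair every ten seconds: the eviction manager's CRI ImageFsInfo query returns image-filesystem stats but an empty ContainerFilesystems list, so "failed to get HasDedicatedImageFs: missing image stats" recurs on every sync. As a hedged local check (assuming the addons-078133 profile is still up), the same CRI data the eviction manager reads can be queried by hand:

	# sketch: inspect CRI-O's image filesystem stats via the CRI, as the kubelet does
	out/minikube-linux-arm64 -p addons-078133 ssh -- sudo crictl imagefsinfo
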
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-078133 -n addons-078133
helpers_test.go:261: (dbg) Run:  kubectl --context addons-078133 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-078133 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-078133 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-078133/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:41:45 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x9nfs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x9nfs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/busybox to addons-078133
	  Normal   Pulling    14m (x4 over 15m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     14m (x4 over 15m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     14m (x4 over 15m)   kubelet            Error: ErrImagePull
	  Warning  Failed     13m (x6 over 15m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    38s (x64 over 15m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (357.59s)
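
The events in the describe output above pin the busybox failure on registry authentication ("unable to retrieve auth token: invalid username/password: unauthorized: authentication failed") rather than on a missing image or tag. A minimal sketch to separate a credential or registry problem from a bad reference, assuming a host with docker and network access to gcr.io (image name and context taken from the log):

	# does the same reference pull anonymously from outside the cluster?
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# list only the failing pull events in the namespace under test
	kubectl --context addons-078133 get events -n default --field-selector reason=Failed
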

TestMultiControlPlane/serial/RestartCluster (127.36s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-985632 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-985632 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m3.087202576s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-985632       NotReady   control-plane   12m     v1.31.1
	ha-985632-m02   Ready      control-plane   12m     v1.31.1
	ha-985632-m04   Ready      <none>          9m58s   v1.31.1

-- /stdout --
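
Only two of the three nodes come back Ready after the restart; the primary control plane ha-985632 itself stays NotReady. A hedged way to block on the laggard before asserting readiness (node name taken from the table above; not part of the test itself):

	# wait up to two minutes for the Ready condition on the stuck node
	kubectl wait --for=condition=Ready node/ha-985632 --timeout=120s
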
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

-- /stdout --
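
The quoted go-template prints one Ready status per node but no node names, which makes the Unknown row hard to attribute. An equivalent jsonpath query that labels each status with its node, offered as a sketch rather than part of the test:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
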
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-985632
helpers_test.go:235: (dbg) docker inspect ha-985632:

-- stdout --
	[
	    {
	        "Id": "473137b9a5acd89e90906d74264015a7d04e6af747aa23db7af2a966f4e17226",
	        "Created": "2024-09-15T07:02:15.756471418Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2584515,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T07:13:24.922124166Z",
	            "FinishedAt": "2024-09-15T07:13:23.992537651Z"
	        },
	        "Image": "sha256:a1b71fa87733590eb4674b16f6945626ae533f3af37066893e3fd70eb9476268",
	        "ResolvConfPath": "/var/lib/docker/containers/473137b9a5acd89e90906d74264015a7d04e6af747aa23db7af2a966f4e17226/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/473137b9a5acd89e90906d74264015a7d04e6af747aa23db7af2a966f4e17226/hostname",
	        "HostsPath": "/var/lib/docker/containers/473137b9a5acd89e90906d74264015a7d04e6af747aa23db7af2a966f4e17226/hosts",
	        "LogPath": "/var/lib/docker/containers/473137b9a5acd89e90906d74264015a7d04e6af747aa23db7af2a966f4e17226/473137b9a5acd89e90906d74264015a7d04e6af747aa23db7af2a966f4e17226-json.log",
	        "Name": "/ha-985632",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-985632:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-985632",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1585e1589f1c7866535106ae3a6947040511076cd7e7793a57c18c5dbad459a6-init/diff:/var/lib/docker/overlay2/72792481ba3fe11d67c9c5bebed6121eb09dffa903ddf816dfb06e703f2d9d5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1585e1589f1c7866535106ae3a6947040511076cd7e7793a57c18c5dbad459a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1585e1589f1c7866535106ae3a6947040511076cd7e7793a57c18c5dbad459a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1585e1589f1c7866535106ae3a6947040511076cd7e7793a57c18c5dbad459a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-985632",
	                "Source": "/var/lib/docker/volumes/ha-985632/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-985632",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-985632",
	                "name.minikube.sigs.k8s.io": "ha-985632",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c0dae374b13862a93ddf4891ad87deb210ddad5df84e1dae4c8a7a82221e0885",
	            "SandboxKey": "/var/run/docker/netns/c0dae374b138",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-985632": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2d0e93ba5acbe9105ecc5166de806f91c7fa9f5daf6147605ef070e06f2aae1e",
	                    "EndpointID": "c9bbe5a8ac0f93d98405172d3b966f7e8db3d233f1b95b5d840116ea7f396e80",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-985632",
	                        "473137b9a5ac"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
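
In the inspect output, HostConfig.PortBindings requests ephemeral host ports (every HostPort is empty), and the ports Docker actually assigned appear under NetworkSettings.Ports. The same Go template minikube itself uses later in this log recovers a single resolved binding; for example, the API server port for this run:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-985632
	# expected output for this run: 35811
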
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-985632 -n ha-985632
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-985632 logs -n 25: (2.002912989s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-985632 cp ha-985632-m03:/home/docker/cp-test.txt                              | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632-m04:/home/docker/cp-test_ha-985632-m03_ha-985632-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-985632 ssh -n                                                                 | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-985632 ssh -n ha-985632-m04 sudo cat                                          | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-985632-m03_ha-985632-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-985632 cp testdata/cp-test.txt                                                | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-985632 ssh -n                                                                 | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-985632 cp ha-985632-m04:/home/docker/cp-test.txt                              | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3739879315/001/cp-test_ha-985632-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-985632 ssh -n                                                                 | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-985632 cp ha-985632-m04:/home/docker/cp-test.txt                              | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632:/home/docker/cp-test_ha-985632-m04_ha-985632.txt                       |           |         |         |                     |                     |
	| ssh     | ha-985632 ssh -n                                                                 | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-985632 ssh -n ha-985632 sudo cat                                              | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-985632-m04_ha-985632.txt                                 |           |         |         |                     |                     |
	| cp      | ha-985632 cp ha-985632-m04:/home/docker/cp-test.txt                              | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632-m02:/home/docker/cp-test_ha-985632-m04_ha-985632-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-985632 ssh -n                                                                 | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-985632 ssh -n ha-985632-m02 sudo cat                                          | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-985632-m04_ha-985632-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-985632 cp ha-985632-m04:/home/docker/cp-test.txt                              | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632-m03:/home/docker/cp-test_ha-985632-m04_ha-985632-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-985632 ssh -n                                                                 | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | ha-985632-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-985632 ssh -n ha-985632-m03 sudo cat                                          | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | /home/docker/cp-test_ha-985632-m04_ha-985632-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-985632 node stop m02 -v=7                                                     | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-985632 node start m02 -v=7                                                    | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:06 UTC | 15 Sep 24 07:07 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-985632 -v=7                                                           | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-985632 -v=7                                                                | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:07 UTC | 15 Sep 24 07:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-985632 --wait=true -v=7                                                    | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:08 UTC | 15 Sep 24 07:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-985632                                                                | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:12 UTC |                     |
	| node    | ha-985632 node delete m03 -v=7                                                   | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:12 UTC | 15 Sep 24 07:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-985632 stop -v=7                                                              | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:12 UTC | 15 Sep 24 07:13 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-985632 --wait=true                                                         | ha-985632 | jenkins | v1.34.0 | 15 Sep 24 07:13 UTC | 15 Sep 24 07:15 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 07:13:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 07:13:24.431573 2584312 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:13:24.431779 2584312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:13:24.431793 2584312 out.go:358] Setting ErrFile to fd 2...
	I0915 07:13:24.431799 2584312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:13:24.432097 2584312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 07:13:24.432534 2584312 out.go:352] Setting JSON to false
	I0915 07:13:24.433577 2584312 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":53755,"bootTime":1726330649,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0915 07:13:24.433745 2584312 start.go:139] virtualization:  
	I0915 07:13:24.437293 2584312 out.go:177] * [ha-985632] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 07:13:24.440882 2584312 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:13:24.440994 2584312 notify.go:220] Checking for updates...
	I0915 07:13:24.447248 2584312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:13:24.450056 2584312 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 07:13:24.452876 2584312 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	I0915 07:13:24.455549 2584312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 07:13:24.458370 2584312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:13:24.461592 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:13:24.462122 2584312 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:13:24.493648 2584312 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 07:13:24.493769 2584312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:13:24.549580 2584312 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-15 07:13:24.540276211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 07:13:24.549696 2584312 docker.go:318] overlay module found
	I0915 07:13:24.554286 2584312 out.go:177] * Using the docker driver based on existing profile
	I0915 07:13:24.556887 2584312 start.go:297] selected driver: docker
	I0915 07:13:24.556910 2584312 start.go:901] validating driver "docker" against &{Name:ha-985632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-985632 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:13:24.557079 2584312 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:13:24.557190 2584312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:13:24.611788 2584312 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-15 07:13:24.601234368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 07:13:24.612248 2584312 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:13:24.612282 2584312 cni.go:84] Creating CNI manager for ""
	I0915 07:13:24.612324 2584312 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0915 07:13:24.612375 2584312 start.go:340] cluster config:
	{Name:ha-985632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-985632 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:13:24.616864 2584312 out.go:177] * Starting "ha-985632" primary control-plane node in "ha-985632" cluster
	I0915 07:13:24.619349 2584312 cache.go:121] Beginning downloading kic base image for docker with crio
	I0915 07:13:24.622003 2584312 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 07:13:24.624575 2584312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:13:24.624635 2584312 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0915 07:13:24.624648 2584312 cache.go:56] Caching tarball of preloaded images
	I0915 07:13:24.624663 2584312 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 07:13:24.624738 2584312 preload.go:172] Found /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0915 07:13:24.624749 2584312 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:13:24.625087 2584312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/config.json ...
	W0915 07:13:24.644224 2584312 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0915 07:13:24.644246 2584312 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 07:13:24.644340 2584312 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 07:13:24.644365 2584312 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 07:13:24.644370 2584312 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 07:13:24.644378 2584312 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 07:13:24.644384 2584312 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 07:13:24.645789 2584312 image.go:273] response: 
	I0915 07:13:24.771103 2584312 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 07:13:24.771146 2584312 cache.go:194] Successfully downloaded all kic artifacts
	I0915 07:13:24.771203 2584312 start.go:360] acquireMachinesLock for ha-985632: {Name:mk1d9005690304f3688525447c7de81feddb10fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:13:24.771280 2584312 start.go:364] duration metric: took 47.285µs to acquireMachinesLock for "ha-985632"
	I0915 07:13:24.771309 2584312 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:13:24.771317 2584312 fix.go:54] fixHost starting: 
	I0915 07:13:24.771588 2584312 cli_runner.go:164] Run: docker container inspect ha-985632 --format={{.State.Status}}
	I0915 07:13:24.788523 2584312 fix.go:112] recreateIfNeeded on ha-985632: state=Stopped err=<nil>
	W0915 07:13:24.788554 2584312 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:13:24.791584 2584312 out.go:177] * Restarting existing docker container for "ha-985632" ...
	I0915 07:13:24.794372 2584312 cli_runner.go:164] Run: docker start ha-985632
	I0915 07:13:25.121885 2584312 cli_runner.go:164] Run: docker container inspect ha-985632 --format={{.State.Status}}
	I0915 07:13:25.140480 2584312 kic.go:430] container "ha-985632" state is running.
	I0915 07:13:25.140963 2584312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632
	I0915 07:13:25.170707 2584312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/config.json ...
	I0915 07:13:25.170970 2584312 machine.go:93] provisionDockerMachine start ...
	I0915 07:13:25.171039 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:13:25.196945 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:13:25.197218 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35808 <nil> <nil>}
	I0915 07:13:25.197230 2584312 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 07:13:25.198382 2584312 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0915 07:13:28.336553 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-985632
	
	I0915 07:13:28.336623 2584312 ubuntu.go:169] provisioning hostname "ha-985632"
	I0915 07:13:28.336699 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:13:28.354716 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:13:28.354981 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35808 <nil> <nil>}
	I0915 07:13:28.355008 2584312 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-985632 && echo "ha-985632" | sudo tee /etc/hostname
	I0915 07:13:28.505911 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-985632
	
	I0915 07:13:28.505994 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:13:28.523335 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:13:28.523589 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35808 <nil> <nil>}
	I0915 07:13:28.523614 2584312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-985632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-985632/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-985632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:13:28.665027 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:13:28.665052 2584312 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-2517725/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-2517725/.minikube}
	I0915 07:13:28.665091 2584312 ubuntu.go:177] setting up certificates
	I0915 07:13:28.665101 2584312 provision.go:84] configureAuth start
	I0915 07:13:28.665162 2584312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632
	I0915 07:13:28.681922 2584312 provision.go:143] copyHostCerts
	I0915 07:13:28.681967 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem
	I0915 07:13:28.682001 2584312 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem, removing ...
	I0915 07:13:28.682014 2584312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem
	I0915 07:13:28.682104 2584312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem (1675 bytes)
	I0915 07:13:28.682194 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem
	I0915 07:13:28.682221 2584312 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem, removing ...
	I0915 07:13:28.682228 2584312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem
	I0915 07:13:28.682258 2584312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem (1082 bytes)
	I0915 07:13:28.682347 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem
	I0915 07:13:28.682368 2584312 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem, removing ...
	I0915 07:13:28.682379 2584312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem
	I0915 07:13:28.682413 2584312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem (1123 bytes)
	I0915 07:13:28.682469 2584312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem org=jenkins.ha-985632 san=[127.0.0.1 192.168.49.2 ha-985632 localhost minikube]
	I0915 07:13:28.965460 2584312 provision.go:177] copyRemoteCerts
	I0915 07:13:28.965534 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:13:28.965586 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:13:28.982619 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35808 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632/id_rsa Username:docker}
	I0915 07:13:29.086681 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:13:29.086750 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:13:29.112174 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:13:29.112247 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0915 07:13:29.138227 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:13:29.138294 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 07:13:29.163135 2584312 provision.go:87] duration metric: took 498.008167ms to configureAuth
	I0915 07:13:29.163208 2584312 ubuntu.go:193] setting minikube options for container-runtime
	I0915 07:13:29.163480 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:13:29.163588 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:13:29.180126 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:13:29.180406 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35808 <nil> <nil>}
	I0915 07:13:29.180426 2584312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:13:29.600665 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:13:29.600753 2584312 machine.go:96] duration metric: took 4.429762617s to provisionDockerMachine
	I0915 07:13:29.600788 2584312 start.go:293] postStartSetup for "ha-985632" (driver="docker")
	I0915 07:13:29.600842 2584312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:13:29.600952 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:13:29.601038 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:13:29.628471 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35808 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632/id_rsa Username:docker}
	I0915 07:13:29.729936 2584312 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:13:29.733135 2584312 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 07:13:29.733176 2584312 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 07:13:29.733188 2584312 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 07:13:29.733195 2584312 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 07:13:29.733213 2584312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/addons for local assets ...
	I0915 07:13:29.733276 2584312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/files for local assets ...
	I0915 07:13:29.733360 2584312 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem -> 25231162.pem in /etc/ssl/certs
	I0915 07:13:29.733372 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem -> /etc/ssl/certs/25231162.pem
	I0915 07:13:29.733478 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:13:29.742266 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem --> /etc/ssl/certs/25231162.pem (1708 bytes)
	I0915 07:13:29.766517 2584312 start.go:296] duration metric: took 165.653884ms for postStartSetup
	I0915 07:13:29.766605 2584312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:13:29.766654 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:13:29.784168 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35808 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632/id_rsa Username:docker}
	I0915 07:13:29.878096 2584312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 07:13:29.882846 2584312 fix.go:56] duration metric: took 5.111519226s for fixHost
	I0915 07:13:29.882872 2584312 start.go:83] releasing machines lock for "ha-985632", held for 5.111577793s
	I0915 07:13:29.882946 2584312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632
	I0915 07:13:29.899545 2584312 ssh_runner.go:195] Run: cat /version.json
	I0915 07:13:29.899598 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:13:29.899864 2584312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:13:29.899932 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:13:29.917999 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35808 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632/id_rsa Username:docker}
	I0915 07:13:29.926515 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35808 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632/id_rsa Username:docker}
	I0915 07:13:30.019934 2584312 ssh_runner.go:195] Run: systemctl --version
	I0915 07:13:30.179586 2584312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:13:30.333328 2584312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 07:13:30.338090 2584312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:13:30.347966 2584312 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0915 07:13:30.348048 2584312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:13:30.357737 2584312 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
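
[Editor's note] The two find/mv invocations above disable any loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so that the CNI minikube deploys (kindnet, per the later "multinode detected" line) owns pod networking. A minimal sketch of the same rename-to-disable idea, with the directory, glob, and suffix taken from the log but the helper itself invented for illustration:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames CNI config files matching pattern in dir by
// appending suffix, skipping files that are already disabled.
func disableCNIConfigs(dir, pattern, suffix string) ([]string, error) {
	matches, err := filepath.Glob(filepath.Join(dir, pattern))
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, m := range matches {
		if strings.HasSuffix(m, suffix) {
			continue // already moved aside on a previous run
		}
		if err := os.Rename(m, m+suffix); err != nil {
			return nil, err
		}
		disabled = append(disabled, m)
	}
	return disabled, nil
}

func main() {
	moved, err := disableCNIConfigs("/etc/cni/net.d", "*loopback.conf*", ".mk_disabled")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", moved)
}
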
	I0915 07:13:30.357763 2584312 start.go:495] detecting cgroup driver to use...
	I0915 07:13:30.357798 2584312 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 07:13:30.357852 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:13:30.370531 2584312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:13:30.382495 2584312 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:13:30.382566 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:13:30.396019 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:13:30.408236 2584312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:13:30.498439 2584312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:13:30.591355 2584312 docker.go:233] disabling docker service ...
	I0915 07:13:30.591433 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:13:30.604752 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:13:30.617298 2584312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:13:30.714598 2584312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:13:30.813327 2584312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:13:30.825315 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:13:30.842956 2584312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:13:30.843070 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:30.854668 2584312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:13:30.854792 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:30.865678 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:30.876647 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:30.887910 2584312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:13:30.898070 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:30.907944 2584312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:30.917725 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:30.927756 2584312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:13:30.937062 2584312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:13:30.946013 2584312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:13:31.042961 2584312 ssh_runner.go:195] Run: sudo systemctl restart crio
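
[Editor's note] The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart: pin the pause image, switch cgroup_manager to cgroupfs, force conmon_cgroup to "pod", and allow unprivileged low ports via a default_sysctls entry. A sketch of one such idempotent whole-line rewrite — the helper below is illustrative, not minikube's code, though the patterns and replacements come from the log:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfigLine replaces every line matching pattern with repl, mirroring
// sed -i 's|^.*pause_image = .*$|pause_image = "..."|' from the log.
func setConfigLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re, err := regexp.Compile("(?m)" + pattern) // (?m) makes ^/$ match per line
	if err != nil {
		return err
	}
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfigLine(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := setConfigLine(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
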
	I0915 07:13:31.167568 2584312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:13:31.167691 2584312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:13:31.171616 2584312 start.go:563] Will wait 60s for crictl version
	I0915 07:13:31.171700 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:13:31.175665 2584312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:13:31.216156 2584312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0915 07:13:31.216278 2584312 ssh_runner.go:195] Run: crio --version
	I0915 07:13:31.258222 2584312 ssh_runner.go:195] Run: crio --version
	I0915 07:13:31.304304 2584312 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0915 07:13:31.306936 2584312 cli_runner.go:164] Run: docker network inspect ha-985632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 07:13:31.322421 2584312 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 07:13:31.326333 2584312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:13:31.340165 2584312 kubeadm.go:883] updating cluster {Name:ha-985632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-985632 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 07:13:31.340334 2584312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:13:31.340413 2584312 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:13:31.389046 2584312 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:13:31.389074 2584312 crio.go:433] Images already preloaded, skipping extraction
	I0915 07:13:31.389135 2584312 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 07:13:31.426640 2584312 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 07:13:31.426664 2584312 cache_images.go:84] Images are preloaded, skipping loading
	I0915 07:13:31.426674 2584312 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0915 07:13:31.426775 2584312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-985632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-985632 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:13:31.426863 2584312 ssh_runner.go:195] Run: crio config
	I0915 07:13:31.477955 2584312 cni.go:84] Creating CNI manager for ""
	I0915 07:13:31.477983 2584312 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0915 07:13:31.477993 2584312 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 07:13:31.478016 2584312 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-985632 NodeName:ha-985632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 07:13:31.478167 2584312 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-985632"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
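
[Editor's note] The generated kubeadm config above is a single file holding four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick way to sanity-check which kinds a bundle like this contains — a stdlib-only sketch, with the file path taken from the scp line further down:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm bundles are plain multi-document YAML separated by "---" lines.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}
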
	
	I0915 07:13:31.478189 2584312 kube-vip.go:115] generating kube-vip config ...
	I0915 07:13:31.478242 2584312 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0915 07:13:31.491709 2584312 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0915 07:13:31.491820 2584312 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
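
[Editor's note] This manifest runs kube-vip as a static pod on each control-plane node: the instances compete for the plndr-cp-lock lease, and the leader claims the virtual IP 192.168.49.254 via ARP; because the ip_vs module is available (the lsmod check above), load-balancing of API traffic on port 8443 is also switched on (lb_enable). That auto-enable decision reduces to a module probe; a sketch of it, using the same command the log runs over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ipvsAvailable reports whether the ip_vs kernel module shows up in lsmod,
// the same check the log runs as: sudo sh -c "lsmod | grep ip_vs".
func ipvsAvailable() bool {
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		return false
	}
	return strings.Contains(string(out), "ip_vs")
}

func main() {
	if ipvsAvailable() {
		fmt.Println("enabling control-plane load-balancing in kube-vip (lb_enable=true)")
	} else {
		fmt.Println("ip_vs not loaded; VIP failover only")
	}
}
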
	I0915 07:13:31.491888 2584312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:13:31.501228 2584312 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:13:31.501351 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0915 07:13:31.510870 2584312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0915 07:13:31.530149 2584312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:13:31.549902 2584312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0915 07:13:31.570036 2584312 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0915 07:13:31.588877 2584312 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:13:31.592571 2584312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
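
[Editor's note] Both hosts-file edits in this log (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same pattern: filter out any stale line for the name, append the fresh tab-separated IP/name pair, and copy the result back over /etc/hosts. A stdlib sketch of that ensure-entry pattern — the helper name is invented, the values come from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t"+name and appends
// "ip\tname", mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
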
	I0915 07:13:31.604211 2584312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:13:31.696625 2584312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:13:31.711279 2584312 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632 for IP: 192.168.49.2
	I0915 07:13:31.711302 2584312 certs.go:194] generating shared ca certs ...
	I0915 07:13:31.711320 2584312 certs.go:226] acquiring lock for ca certs: {Name:mk5e6b4b1562ab546f1aa57699f236200f49d7e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:31.711518 2584312 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key
	I0915 07:13:31.711586 2584312 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key
	I0915 07:13:31.711601 2584312 certs.go:256] generating profile certs ...
	I0915 07:13:31.711703 2584312 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/client.key
	I0915 07:13:31.711748 2584312 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.key.f39b4577
	I0915 07:13:31.711774 2584312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.crt.f39b4577 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0915 07:13:31.995529 2584312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.crt.f39b4577 ...
	I0915 07:13:31.995562 2584312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.crt.f39b4577: {Name:mkf9c3abece6fcff0ebeed47bcc4a264a7b6d28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:31.995827 2584312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.key.f39b4577 ...
	I0915 07:13:31.995858 2584312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.key.f39b4577: {Name:mkc736f2d4c113ecbf326f036eadd8846077da87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:31.996004 2584312 certs.go:381] copying /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.crt.f39b4577 -> /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.crt
	I0915 07:13:31.996192 2584312 certs.go:385] copying /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.key.f39b4577 -> /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.key
	I0915 07:13:31.996387 2584312 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/proxy-client.key
	I0915 07:13:31.996409 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:13:31.996445 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:13:31.996464 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:13:31.996479 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:13:31.996515 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:13:31.996533 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:13:31.996565 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:13:31.996584 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:13:31.996652 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116.pem (1338 bytes)
	W0915 07:13:31.996707 2584312 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116_empty.pem, impossibly tiny 0 bytes
	I0915 07:13:31.996722 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:13:31.996762 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:13:31.996825 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:13:31.996868 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem (1675 bytes)
	I0915 07:13:31.996939 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem (1708 bytes)
	I0915 07:13:31.997020 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116.pem -> /usr/share/ca-certificates/2523116.pem
	I0915 07:13:31.997055 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem -> /usr/share/ca-certificates/25231162.pem
	I0915 07:13:31.997076 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:31.997699 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:13:32.029909 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:13:32.062618 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:13:32.087307 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 07:13:32.112484 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0915 07:13:32.137970 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:13:32.163644 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:13:32.190400 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:13:32.216649 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116.pem --> /usr/share/ca-certificates/2523116.pem (1338 bytes)
	I0915 07:13:32.243116 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem --> /usr/share/ca-certificates/25231162.pem (1708 bytes)
	I0915 07:13:32.269292 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:13:32.294988 2584312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 07:13:32.314524 2584312 ssh_runner.go:195] Run: openssl version
	I0915 07:13:32.320277 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2523116.pem && ln -fs /usr/share/ca-certificates/2523116.pem /etc/ssl/certs/2523116.pem"
	I0915 07:13:32.330206 2584312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2523116.pem
	I0915 07:13:32.334115 2584312 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:58 /usr/share/ca-certificates/2523116.pem
	I0915 07:13:32.334184 2584312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2523116.pem
	I0915 07:13:32.341644 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2523116.pem /etc/ssl/certs/51391683.0"
	I0915 07:13:32.351296 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25231162.pem && ln -fs /usr/share/ca-certificates/25231162.pem /etc/ssl/certs/25231162.pem"
	I0915 07:13:32.361465 2584312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25231162.pem
	I0915 07:13:32.365460 2584312 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:58 /usr/share/ca-certificates/25231162.pem
	I0915 07:13:32.365538 2584312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25231162.pem
	I0915 07:13:32.372774 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25231162.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:13:32.382747 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:13:32.392582 2584312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:32.396253 2584312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:38 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:32.396415 2584312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:32.404026 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
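
[Editor's note] The openssl x509 -hash / ln -fs pairs above install each CA into the system trust store: OpenSSL looks certificates up through <subject-hash>.0 symlinks in /etc/ssl/certs (the c_rehash convention), which is why minikubeCA.pem ends up linked as b5213941.0. A sketch that shells out for the hash and creates the link — illustrative, not minikube's exact code path:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and links it
// into dir as <hash>.0, the lookup name OpenSSL expects for trusted CAs.
func linkCertByHash(certPath, dir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	os.Remove(link) // replace any stale link, like ln -fs
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked as", link)
}
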
	I0915 07:13:32.413308 2584312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:13:32.416910 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 07:13:32.423960 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 07:13:32.431206 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 07:13:32.438233 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 07:13:32.445341 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 07:13:32.452472 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
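
[Editor's note] Each openssl x509 -checkend 86400 above asks whether the certificate will still be valid 24 hours from now; a failure here would force regeneration before the cluster restart proceeds. The same check expressed in Go, as a stdlib-only sketch (the path below is one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// equivalent to a failing `openssl x509 -checkend` for that window.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
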
	I0915 07:13:32.459712 2584312 kubeadm.go:392] StartCluster: {Name:ha-985632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-985632 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:13:32.459848 2584312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 07:13:32.459931 2584312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 07:13:32.498695 2584312 cri.go:89] found id: ""
	I0915 07:13:32.498828 2584312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 07:13:32.508211 2584312 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0915 07:13:32.508234 2584312 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0915 07:13:32.508311 2584312 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0915 07:13:32.517394 2584312 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 07:13:32.517890 2584312 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-985632" does not appear in /home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 07:13:32.518014 2584312 kubeconfig.go:62] /home/jenkins/minikube-integration/19644-2517725/kubeconfig needs updating (will repair): [kubeconfig missing "ha-985632" cluster setting kubeconfig missing "ha-985632" context setting]
	I0915 07:13:32.518293 2584312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/kubeconfig: {Name:mkc3f194059147bb4fbadd341bbbabf67fee0985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:32.518731 2584312 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 07:13:32.519090 2584312 kapi.go:59] client config for ha-985632: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/client.crt", KeyFile:"/home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/client.key", CAFile:"/home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1e6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 07:13:32.519779 2584312 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 07:13:32.519875 2584312 cert_rotation.go:140] Starting client certificate rotation controller
	I0915 07:13:32.531989 2584312 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0915 07:13:32.532015 2584312 kubeadm.go:597] duration metric: took 23.774446ms to restartPrimaryControlPlane
	I0915 07:13:32.532024 2584312 kubeadm.go:394] duration metric: took 72.321901ms to StartCluster
	I0915 07:13:32.532048 2584312 settings.go:142] acquiring lock: {Name:mka250035ae8fe54edf72ffd2d620ea51b449138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:32.532114 2584312 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 07:13:32.532780 2584312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-2517725/kubeconfig: {Name:mkc3f194059147bb4fbadd341bbbabf67fee0985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:32.533053 2584312 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:13:32.533084 2584312 start.go:241] waiting for startup goroutines ...
	I0915 07:13:32.533093 2584312 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 07:13:32.533590 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:13:32.537450 2584312 out.go:177] * Enabled addons: 
	I0915 07:13:32.540260 2584312 addons.go:510] duration metric: took 7.153082ms for enable addons: enabled=[]
	I0915 07:13:32.540313 2584312 start.go:246] waiting for cluster config update ...
	I0915 07:13:32.540323 2584312 start.go:255] writing updated cluster config ...
	I0915 07:13:32.543588 2584312 out.go:201] 
	I0915 07:13:32.546572 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:13:32.546710 2584312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/config.json ...
	I0915 07:13:32.550023 2584312 out.go:177] * Starting "ha-985632-m02" control-plane node in "ha-985632" cluster
	I0915 07:13:32.552757 2584312 cache.go:121] Beginning downloading kic base image for docker with crio
	I0915 07:13:32.555583 2584312 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 07:13:32.558125 2584312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:13:32.558165 2584312 cache.go:56] Caching tarball of preloaded images
	I0915 07:13:32.558206 2584312 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 07:13:32.558262 2584312 preload.go:172] Found /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0915 07:13:32.558276 2584312 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:13:32.558405 2584312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/config.json ...
	W0915 07:13:32.575517 2584312 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0915 07:13:32.575543 2584312 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 07:13:32.575641 2584312 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 07:13:32.575664 2584312 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 07:13:32.575673 2584312 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 07:13:32.575682 2584312 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 07:13:32.575695 2584312 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 07:13:32.576985 2584312 image.go:273] response: 
	I0915 07:13:32.698481 2584312 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 07:13:32.698519 2584312 cache.go:194] Successfully downloaded all kic artifacts
	I0915 07:13:32.698552 2584312 start.go:360] acquireMachinesLock for ha-985632-m02: {Name:mk1921c849c5893db157eb579919ef731f4791af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:13:32.698620 2584312 start.go:364] duration metric: took 47.449µs to acquireMachinesLock for "ha-985632-m02"
	I0915 07:13:32.698648 2584312 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:13:32.698655 2584312 fix.go:54] fixHost starting: m02
	I0915 07:13:32.698942 2584312 cli_runner.go:164] Run: docker container inspect ha-985632-m02 --format={{.State.Status}}
	I0915 07:13:32.716204 2584312 fix.go:112] recreateIfNeeded on ha-985632-m02: state=Stopped err=<nil>
	W0915 07:13:32.716235 2584312 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:13:32.721117 2584312 out.go:177] * Restarting existing docker container for "ha-985632-m02" ...
	I0915 07:13:32.723753 2584312 cli_runner.go:164] Run: docker start ha-985632-m02
	I0915 07:13:33.030914 2584312 cli_runner.go:164] Run: docker container inspect ha-985632-m02 --format={{.State.Status}}
	I0915 07:13:33.056812 2584312 kic.go:430] container "ha-985632-m02" state is running.
	I0915 07:13:33.057284 2584312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632-m02
	I0915 07:13:33.084204 2584312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/config.json ...
	I0915 07:13:33.084528 2584312 machine.go:93] provisionDockerMachine start ...
	I0915 07:13:33.084597 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m02
	I0915 07:13:33.107196 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:13:33.107571 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35813 <nil> <nil>}
	I0915 07:13:33.107586 2584312 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 07:13:33.108303 2584312 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51134->127.0.0.1:35813: read: connection reset by peer
	I0915 07:13:36.291769 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-985632-m02
	
	I0915 07:13:36.291797 2584312 ubuntu.go:169] provisioning hostname "ha-985632-m02"
	I0915 07:13:36.291862 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m02
	I0915 07:13:36.323858 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:13:36.324123 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35813 <nil> <nil>}
	I0915 07:13:36.324141 2584312 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-985632-m02 && echo "ha-985632-m02" | sudo tee /etc/hostname
	I0915 07:13:36.555532 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-985632-m02
	
	I0915 07:13:36.555697 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m02
	I0915 07:13:36.583862 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:13:36.584097 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35813 <nil> <nil>}
	I0915 07:13:36.584113 2584312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-985632-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-985632-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-985632-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:13:36.770231 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:13:36.770309 2584312 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-2517725/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-2517725/.minikube}
	I0915 07:13:36.770360 2584312 ubuntu.go:177] setting up certificates
	I0915 07:13:36.770386 2584312 provision.go:84] configureAuth start
	I0915 07:13:36.770465 2584312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632-m02
	I0915 07:13:36.802892 2584312 provision.go:143] copyHostCerts
	I0915 07:13:36.802936 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem
	I0915 07:13:36.802971 2584312 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem, removing ...
	I0915 07:13:36.802979 2584312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem
	I0915 07:13:36.803055 2584312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem (1082 bytes)
	I0915 07:13:36.803140 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem
	I0915 07:13:36.803159 2584312 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem, removing ...
	I0915 07:13:36.803164 2584312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem
	I0915 07:13:36.803191 2584312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem (1123 bytes)
	I0915 07:13:36.803232 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem
	I0915 07:13:36.803248 2584312 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem, removing ...
	I0915 07:13:36.803252 2584312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem
	I0915 07:13:36.803276 2584312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem (1675 bytes)
	I0915 07:13:36.803322 2584312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem org=jenkins.ha-985632-m02 san=[127.0.0.1 192.168.49.3 ha-985632-m02 localhost minikube]
	I0915 07:13:37.122502 2584312 provision.go:177] copyRemoteCerts
	I0915 07:13:37.122579 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:13:37.122627 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m02
	I0915 07:13:37.142626 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m02/id_rsa Username:docker}
	I0915 07:13:37.267098 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:13:37.267165 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:13:37.343465 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:13:37.343539 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 07:13:37.376027 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:13:37.376105 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 07:13:37.404197 2584312 provision.go:87] duration metric: took 633.782892ms to configureAuth
	I0915 07:13:37.404274 2584312 ubuntu.go:193] setting minikube options for container-runtime
	I0915 07:13:37.404570 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:13:37.404732 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m02
	I0915 07:13:37.428780 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:13:37.429628 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35813 <nil> <nil>}
	I0915 07:13:37.429682 2584312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:13:37.905123 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:13:37.905190 2584312 machine.go:96] duration metric: took 4.820645661s to provisionDockerMachine
	I0915 07:13:37.905217 2584312 start.go:293] postStartSetup for "ha-985632-m02" (driver="docker")
	I0915 07:13:37.905252 2584312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:13:37.905348 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:13:37.905424 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m02
	I0915 07:13:37.925205 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m02/id_rsa Username:docker}
	I0915 07:13:38.043352 2584312 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:13:38.047825 2584312 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 07:13:38.047863 2584312 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 07:13:38.047874 2584312 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 07:13:38.047883 2584312 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 07:13:38.047894 2584312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/addons for local assets ...
	I0915 07:13:38.047960 2584312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/files for local assets ...
	I0915 07:13:38.048043 2584312 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem -> 25231162.pem in /etc/ssl/certs
	I0915 07:13:38.048056 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem -> /etc/ssl/certs/25231162.pem
	I0915 07:13:38.048158 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:13:38.071445 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem --> /etc/ssl/certs/25231162.pem (1708 bytes)
	I0915 07:13:38.125999 2584312 start.go:296] duration metric: took 220.743564ms for postStartSetup
	I0915 07:13:38.126084 2584312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:13:38.126149 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m02
	I0915 07:13:38.147063 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m02/id_rsa Username:docker}
	I0915 07:13:38.314622 2584312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 07:13:38.329847 2584312 fix.go:56] duration metric: took 5.631182852s for fixHost
	I0915 07:13:38.329872 2584312 start.go:83] releasing machines lock for "ha-985632-m02", held for 5.631237423s
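The paired "took …µs to acquireMachinesLock" and "releasing machines lock …, held for …" lines bracket the whole fixHost sequence with a duration metric. A toy version of the pattern (minikube's actual machines lock is cross-process; this sketch only shows the timing idea):

    package sketch

    import (
        "log"
        "sync"
        "time"
    )

    // withMachineLock times both lock acquisition and hold duration,
    // mirroring the start.go:360/364/83 log lines above.
    func withMachineLock(mu *sync.Mutex, name string, fn func() error) error {
        start := time.Now()
        mu.Lock()
        log.Printf("took %s to acquire lock for %q", time.Since(start), name)
        held := time.Now()
        defer func() {
            mu.Unlock()
            log.Printf("releasing lock for %q, held for %s", name, time.Since(held))
        }()
        return fn()
    }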
	I0915 07:13:38.329942 2584312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632-m02
	I0915 07:13:38.368109 2584312 out.go:177] * Found network options:
	I0915 07:13:38.371265 2584312 out.go:177]   - NO_PROXY=192.168.49.2
	W0915 07:13:38.374914 2584312 proxy.go:119] fail to check proxy env: Error ip not in block
	W0915 07:13:38.374963 2584312 proxy.go:119] fail to check proxy env: Error ip not in block
	I0915 07:13:38.375031 2584312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:13:38.375073 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m02
	I0915 07:13:38.375342 2584312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:13:38.375400 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m02
	I0915 07:13:38.411715 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m02/id_rsa Username:docker}
	I0915 07:13:38.415339 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35813 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m02/id_rsa Username:docker}
	I0915 07:13:38.793369 2584312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 07:13:38.935354 2584312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:13:38.980434 2584312 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0915 07:13:38.980521 2584312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:13:39.004378 2584312 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0915 07:13:39.004410 2584312 start.go:495] detecting cgroup driver to use...
	I0915 07:13:39.004464 2584312 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 07:13:39.004529 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:13:39.036400 2584312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:13:39.101730 2584312 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:13:39.101798 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:13:39.122240 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:13:39.190336 2584312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:13:39.472925 2584312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:13:39.777968 2584312 docker.go:233] disabling docker service ...
	I0915 07:13:39.778050 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:13:39.823764 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:13:39.863680 2584312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:13:40.163716 2584312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:13:40.470176 2584312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:13:40.523326 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:13:40.617589 2584312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:13:40.617731 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:40.682756 2584312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:13:40.682879 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:40.749437 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:40.812494 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:40.868928 2584312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:13:40.921509 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:40.969465 2584312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:40.994044 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:13:41.031808 2584312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:13:41.086522 2584312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:13:41.109856 2584312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:13:41.398928 2584312 ssh_runner.go:195] Run: sudo systemctl restart crio
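The run of sed one-liners above converges /etc/crio/crio.conf.d/02-crio.conf on four settings: pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls list carrying "net.ipv4.ip_unprivileged_port_start=0", after which crio is restarted. A Go rendering of the key-rewriting step (an illustration of the technique, not minikube's code):

    package sketch

    import (
        "os"
        "regexp"
    )

    // setConfKey rewrites a `key = ...` line in a TOML-style conf file, the
    // way the sed commands above rewrite pause_image and cgroup_manager.
    func setConfKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0o644)
    }

For example, setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10") followed by a daemon-reload and crio restart reproduces the first edit.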
	I0915 07:13:41.920307 2584312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:13:41.920385 2584312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:13:41.934749 2584312 start.go:563] Will wait 60s for crictl version
	I0915 07:13:41.934871 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:13:41.939408 2584312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:13:41.996793 2584312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0915 07:13:41.996974 2584312 ssh_runner.go:195] Run: crio --version
	I0915 07:13:42.075860 2584312 ssh_runner.go:195] Run: crio --version
	I0915 07:13:42.220306 2584312 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0915 07:13:42.223013 2584312 out.go:177]   - env NO_PROXY=192.168.49.2
	I0915 07:13:42.225732 2584312 cli_runner.go:164] Run: docker network inspect ha-985632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 07:13:42.252008 2584312 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 07:13:42.260796 2584312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:13:42.283452 2584312 mustload.go:65] Loading cluster: ha-985632
	I0915 07:13:42.283723 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:13:42.284067 2584312 cli_runner.go:164] Run: docker container inspect ha-985632 --format={{.State.Status}}
	I0915 07:13:42.317027 2584312 host.go:66] Checking if "ha-985632" exists ...
	I0915 07:13:42.317328 2584312 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632 for IP: 192.168.49.3
	I0915 07:13:42.317340 2584312 certs.go:194] generating shared ca certs ...
	I0915 07:13:42.317355 2584312 certs.go:226] acquiring lock for ca certs: {Name:mk5e6b4b1562ab546f1aa57699f236200f49d7e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:13:42.317480 2584312 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key
	I0915 07:13:42.317527 2584312 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key
	I0915 07:13:42.317539 2584312 certs.go:256] generating profile certs ...
	I0915 07:13:42.317631 2584312 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/client.key
	I0915 07:13:42.317708 2584312 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.key.cb23a9f0
	I0915 07:13:42.317754 2584312 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/proxy-client.key
	I0915 07:13:42.317768 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:13:42.317782 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:13:42.317810 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:13:42.317851 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:13:42.317875 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 07:13:42.317896 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 07:13:42.317912 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 07:13:42.317927 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 07:13:42.317982 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116.pem (1338 bytes)
	W0915 07:13:42.318018 2584312 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116_empty.pem, impossibly tiny 0 bytes
	I0915 07:13:42.318032 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:13:42.318067 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:13:42.318107 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:13:42.318138 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem (1675 bytes)
	I0915 07:13:42.318187 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem (1708 bytes)
	I0915 07:13:42.318219 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116.pem -> /usr/share/ca-certificates/2523116.pem
	I0915 07:13:42.318236 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem -> /usr/share/ca-certificates/25231162.pem
	I0915 07:13:42.318248 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:42.318320 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:13:42.346890 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35808 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632/id_rsa Username:docker}
	I0915 07:13:42.461101 2584312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0915 07:13:42.474515 2584312 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0915 07:13:42.507530 2584312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0915 07:13:42.516082 2584312 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0915 07:13:42.547869 2584312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0915 07:13:42.559156 2584312 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0915 07:13:42.592210 2584312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0915 07:13:42.604175 2584312 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0915 07:13:42.627665 2584312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0915 07:13:42.640291 2584312 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0915 07:13:42.674615 2584312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0915 07:13:42.687462 2584312 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0915 07:13:42.706574 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:13:42.733248 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:13:42.769872 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:13:42.798864 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 07:13:42.826253 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0915 07:13:42.857798 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 07:13:42.885402 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 07:13:42.912227 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 07:13:42.958457 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116.pem --> /usr/share/ca-certificates/2523116.pem (1338 bytes)
	I0915 07:13:42.988154 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem --> /usr/share/ca-certificates/25231162.pem (1708 bytes)
	I0915 07:13:43.027159 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:13:43.055252 2584312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0915 07:13:43.078705 2584312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0915 07:13:43.099651 2584312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0915 07:13:43.120468 2584312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0915 07:13:43.144738 2584312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0915 07:13:43.164392 2584312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0915 07:13:43.189402 2584312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0915 07:13:43.209468 2584312 ssh_runner.go:195] Run: openssl version
	I0915 07:13:43.221579 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25231162.pem && ln -fs /usr/share/ca-certificates/25231162.pem /etc/ssl/certs/25231162.pem"
	I0915 07:13:43.238298 2584312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25231162.pem
	I0915 07:13:43.242847 2584312 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:58 /usr/share/ca-certificates/25231162.pem
	I0915 07:13:43.242965 2584312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25231162.pem
	I0915 07:13:43.257118 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25231162.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:13:43.274288 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:13:43.285379 2584312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:43.294649 2584312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:38 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:43.294767 2584312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:13:43.305725 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:13:43.322347 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2523116.pem && ln -fs /usr/share/ca-certificates/2523116.pem /etc/ssl/certs/2523116.pem"
	I0915 07:13:43.336082 2584312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2523116.pem
	I0915 07:13:43.340515 2584312 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:58 /usr/share/ca-certificates/2523116.pem
	I0915 07:13:43.340639 2584312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2523116.pem
	I0915 07:13:43.353371 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2523116.pem /etc/ssl/certs/51391683.0"
	I0915 07:13:43.368378 2584312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:13:43.376375 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 07:13:43.389462 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 07:13:43.402921 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 07:13:43.413458 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 07:13:43.422482 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 07:13:43.430116 2584312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
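Each `openssl x509 -noout -in … -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours, presumably so stale control-plane certs can be regenerated before kubeadm runs. The same check in Go, sketched with crypto/x509:

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, equivalent to `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }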
	I0915 07:13:43.442056 2584312 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I0915 07:13:43.442225 2584312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-985632-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-985632 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:13:43.442274 2584312 kube-vip.go:115] generating kube-vip config ...
	I0915 07:13:43.442353 2584312 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0915 07:13:43.470270 2584312 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0915 07:13:43.470419 2584312 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
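The lb_enable and lb_port entries in the manifest above were switched on by the `lsmod | grep ip_vs` probe logged at kube-vip.go:115/167; the rendered YAML is then copied to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp below) so the kubelet runs it as a static pod. A sketch of that probe, assuming the presence of the ip_vs module is the whole decision:

    package sketch

    import "os/exec"

    // lbSupported reports whether the ip_vs kernel module is loaded; kube-vip's
    // control-plane load balancing is only auto-enabled when it is.
    func lbSupported() bool {
        out, err := exec.Command("sh", "-c", "lsmod | grep ip_vs").CombinedOutput()
        return err == nil && len(out) > 0
    }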
	I0915 07:13:43.470532 2584312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:13:43.481004 2584312 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:13:43.481167 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0915 07:13:43.494376 2584312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0915 07:13:43.523445 2584312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 07:13:43.551477 2584312 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0915 07:13:43.579299 2584312 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:13:43.587942 2584312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
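The pipeline above pins control-plane.minikube.internal to the VIP by filtering any stale entry out of /etc/hosts and appending a fresh one; the same trick pinned host.minikube.internal at 07:13:42.260. The idea in Go (illustrative only):

    package sketch

    import (
        "os"
        "strings"
    )

    // pinHostsEntry drops any existing "<ip>\t<host>" line and appends a fresh
    // one, like the `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` pipeline.
    func pinHostsEntry(hostsPath, ip, host string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }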
	I0915 07:13:43.599553 2584312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:13:43.789200 2584312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:13:43.806265 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:13:43.805973 2584312 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 07:13:43.811872 2584312 out.go:177] * Verifying Kubernetes components...
	I0915 07:13:43.814483 2584312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:13:44.004949 2584312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:13:44.022465 2584312 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 07:13:44.022843 2584312 kapi.go:59] client config for ha-985632: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/client.crt", KeyFile:"/home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/client.key", CAFile:"/home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1e6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0915 07:13:44.022943 2584312 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0915 07:13:44.023239 2584312 node_ready.go:35] waiting up to 6m0s for node "ha-985632-m02" to be "Ready" ...
	I0915 07:13:44.023368 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:44.023404 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:44.023428 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:44.023451 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:52.543781 2584312 round_trippers.go:574] Response Status: 500 Internal Server Error in 8520 milliseconds
	I0915 07:13:52.544678 2584312 node_ready.go:53] error getting node "ha-985632-m02": etcdserver: leader changed
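The GET / Request Headers / Response Status triplets that fill the rest of the log come from a verbose round-tripper wrapped around the Kubernetes client; the 500 "etcdserver: leader changed" is a transient error while the restarted etcd members re-elect a leader, and the immediate retry below gets a 200. A minimal logging http.RoundTripper in the same spirit (an illustration, not client-go's round_trippers.go):

    package sketch

    import (
        "log"
        "net/http"
        "strings"
        "time"
    )

    // loggingRT logs method, URL, request headers, and the timed response
    // status for every request, in the style of the round_trippers lines.
    type loggingRT struct{ next http.RoundTripper }

    func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
        log.Printf("%s %s", req.Method, req.URL)
        log.Printf("Request Headers:")
        for k, vals := range req.Header {
            log.Printf("    %s: %s", k, strings.Join(vals, ", "))
        }
        start := time.Now()
        resp, err := l.next.RoundTrip(req)
        if err != nil {
            return nil, err
        }
        log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
        return resp, nil
    }

Wiring it up is one line: &http.Client{Transport: loggingRT{next: http.DefaultTransport}}.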
	I0915 07:13:52.544745 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:52.544751 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:52.544759 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:52.544764 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:52.550731 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:13:52.552095 2584312 node_ready.go:49] node "ha-985632-m02" has status "Ready":"True"
	I0915 07:13:52.552119 2584312 node_ready.go:38] duration metric: took 8.528841705s for node "ha-985632-m02" to be "Ready" ...
	I0915 07:13:52.552129 2584312 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:13:52.552171 2584312 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0915 07:13:52.552182 2584312 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0915 07:13:52.552239 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0915 07:13:52.552243 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:52.552251 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:52.552256 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:52.554791 2584312 round_trippers.go:574] Response Status: 429 Too Many Requests in 2 milliseconds
	I0915 07:13:53.554963 2584312 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0915 07:13:53.555011 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0915 07:13:53.555017 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:53.555025 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:53.555031 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:53.571831 2584312 round_trippers.go:574] Response Status: 429 Too Many Requests in 16 milliseconds
	I0915 07:13:54.576543 2584312 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0915 07:13:54.576590 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0915 07:13:54.576596 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.576605 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.576611 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.586673 2584312 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
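The two 429s above are absorbed by the client's retry layer: it reads the Retry-After header, sleeps the advertised second, and reissues the GET, which is exactly what the with_retry.go:234 lines record. A hedged sketch of that loop:

    package sketch

    import (
        "net/http"
        "strconv"
        "time"
    )

    // getWithRetry retries a GET on 429 Too Many Requests, honoring a
    // Retry-After header given in seconds, up to maxRetries attempts.
    func getWithRetry(c *http.Client, url string, maxRetries int) (*http.Response, error) {
        for attempt := 0; ; attempt++ {
            resp, err := c.Get(url)
            if err != nil {
                return nil, err
            }
            if resp.StatusCode != http.StatusTooManyRequests || attempt >= maxRetries {
                return resp, nil
            }
            resp.Body.Close()
            delay := time.Second
            if s := resp.Header.Get("Retry-After"); s != "" {
                if secs, err := strconv.Atoi(s); err == nil {
                    delay = time.Duration(secs) * time.Second
                }
            }
            time.Sleep(delay)
        }
    }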
	I0915 07:13:54.598356 2584312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fr4vw" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:54.598537 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fr4vw
	I0915 07:13:54.598566 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.598593 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.598615 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.602623 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:13:54.603270 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:13:54.603282 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.603291 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.603294 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.606653 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:13:54.607134 2584312 pod_ready.go:93] pod "coredns-7c65d6cfc9-fr4vw" in "kube-system" namespace has status "Ready":"True"
	I0915 07:13:54.607146 2584312 pod_ready.go:82] duration metric: took 8.716202ms for pod "coredns-7c65d6cfc9-fr4vw" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:54.607156 2584312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l2k54" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:54.607218 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-l2k54
	I0915 07:13:54.607224 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.607231 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.607236 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.610811 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:13:54.612541 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:13:54.612595 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.612619 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.612643 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.617106 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:13:54.618279 2584312 pod_ready.go:93] pod "coredns-7c65d6cfc9-l2k54" in "kube-system" namespace has status "Ready":"True"
	I0915 07:13:54.618332 2584312 pod_ready.go:82] duration metric: took 11.169098ms for pod "coredns-7c65d6cfc9-l2k54" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:54.618359 2584312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:54.618455 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-985632
	I0915 07:13:54.618482 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.618508 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.618529 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.622280 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:13:54.623456 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:13:54.623514 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.623538 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.623562 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.627939 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:13:54.628679 2584312 pod_ready.go:93] pod "etcd-ha-985632" in "kube-system" namespace has status "Ready":"True"
	I0915 07:13:54.628741 2584312 pod_ready.go:82] duration metric: took 10.359419ms for pod "etcd-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:54.628769 2584312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:54.628897 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-985632-m02
	I0915 07:13:54.628924 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.628946 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.628968 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.631843 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:13:54.632575 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:54.632621 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.632644 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.632669 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.635780 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:13:54.636351 2584312 pod_ready.go:93] pod "etcd-ha-985632-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:13:54.636395 2584312 pod_ready.go:82] duration metric: took 7.600242ms for pod "etcd-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:54.636435 2584312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-985632-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:54.636529 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-985632-m03
	I0915 07:13:54.636554 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.636577 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.636603 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.639631 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:13:54.776995 2584312 request.go:632] Waited for 136.279427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632-m03
	I0915 07:13:54.777132 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m03
	I0915 07:13:54.777145 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.777156 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.777161 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.779790 2584312 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0915 07:13:54.779925 2584312 pod_ready.go:98] node "ha-985632-m03" hosting pod "etcd-ha-985632-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-985632-m03": nodes "ha-985632-m03" not found
	I0915 07:13:54.779945 2584312 pod_ready.go:82] duration metric: took 143.489294ms for pod "etcd-ha-985632-m03" in "kube-system" namespace to be "Ready" ...
	E0915 07:13:54.779955 2584312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-985632-m03" hosting pod "etcd-ha-985632-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-985632-m03": nodes "ha-985632-m03" not found
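The "Waited … due to client-side throttling, not priority and fairness" lines are the client's own rate limiter at work: the burst of pod and node GETs during WaitExtra exhausts the local token bucket (client-go defaults to QPS 5 with burst 10 unless overridden, an assumption here since the log does not print the limits), so requests queue locally before ever reaching the apiserver. The same mechanism sketched with golang.org/x/time/rate:

    package sketch

    import (
        "context"
        "log"
        "net/http"
        "time"

        "golang.org/x/time/rate"
    )

    // limiter approximates client-go's default client-side throttle (assumed
    // QPS 5, burst 10); waits longer than 50ms get logged like request.go:632.
    var limiter = rate.NewLimiter(rate.Limit(5), 10)

    func throttledGet(ctx context.Context, c *http.Client, url string) (*http.Response, error) {
        start := time.Now()
        if err := limiter.Wait(ctx); err != nil {
            return nil, err
        }
        if waited := time.Since(start); waited > 50*time.Millisecond {
            log.Printf("Waited for %s due to client-side throttling, not priority and fairness, request: GET:%s", waited, url)
        }
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return nil, err
        }
        return c.Do(req)
    }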
	I0915 07:13:54.779975 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:54.977275 2584312 request.go:632] Waited for 197.226917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:13:54.977386 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:13:54.977400 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:54.977409 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:54.977433 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:54.985316 2584312 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 07:13:55.177747 2584312 request.go:632] Waited for 191.47893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:13:55.177819 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:13:55.177846 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:55.177861 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:55.177866 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:55.206328 2584312 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0915 07:13:55.207598 2584312 pod_ready.go:93] pod "kube-apiserver-ha-985632" in "kube-system" namespace has status "Ready":"True"
	I0915 07:13:55.207631 2584312 pod_ready.go:82] duration metric: took 427.644289ms for pod "kube-apiserver-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:55.207649 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:13:55.377032 2584312 request.go:632] Waited for 169.289439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:55.377096 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:55.377102 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:55.377111 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:55.377118 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:55.383087 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:13:55.577414 2584312 request.go:632] Waited for 193.294327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:55.577472 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:55.577487 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:55.577497 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:55.577508 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:55.580625 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:13:55.776989 2584312 request.go:632] Waited for 68.196971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:55.777057 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:55.777068 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:55.777077 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:55.777086 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:55.779998 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:13:55.977301 2584312 request.go:632] Waited for 196.301926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:55.977362 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:55.977371 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:55.977381 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:55.977390 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:55.980565 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:13:56.208638 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:56.208668 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:56.208679 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:56.208682 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:56.257872 2584312 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0915 07:13:56.377099 2584312 request.go:632] Waited for 118.224539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:56.377175 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:56.377189 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:56.377198 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:56.377207 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:56.385715 2584312 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 07:13:56.707840 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:56.708028 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:56.708059 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:56.708084 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:56.745705 2584312 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0915 07:13:56.776821 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:56.776887 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:56.776913 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:56.776935 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:56.785711 2584312 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 07:13:57.208881 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:57.208960 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:57.208984 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:57.209005 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:57.304556 2584312 round_trippers.go:574] Response Status: 200 OK in 95 milliseconds
	I0915 07:13:57.311895 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:57.311974 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:57.311998 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:57.312022 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:57.335260 2584312 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0915 07:13:57.335936 2584312 pod_ready.go:103] pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace has status "Ready":"False"
	I0915 07:13:57.708252 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:57.708317 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:57.708353 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:57.708371 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:57.801134 2584312 round_trippers.go:574] Response Status: 200 OK in 92 milliseconds
	I0915 07:13:57.812422 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:57.812515 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:57.812540 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:57.812563 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:57.974816 2584312 round_trippers.go:574] Response Status: 200 OK in 162 milliseconds
	I0915 07:13:58.207858 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:58.207941 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:58.207966 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:58.207990 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:58.302111 2584312 round_trippers.go:574] Response Status: 200 OK in 94 milliseconds
	I0915 07:13:58.303758 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:58.303832 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:58.303856 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:58.303883 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:58.356149 2584312 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0915 07:13:58.707964 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:58.708042 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:58.708067 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:58.708092 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:58.714707 2584312 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:13:58.715697 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:58.715766 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:58.715790 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:58.715812 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:58.738183 2584312 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0915 07:13:59.208226 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:59.208301 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:59.208324 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:59.208346 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:59.224626 2584312 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0915 07:13:59.234720 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:59.234797 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:59.234822 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:59.234847 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:59.281194 2584312 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0915 07:13:59.708400 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:13:59.708418 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:59.708428 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:59.708432 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:59.753460 2584312 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0915 07:13:59.754672 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:13:59.754690 2584312 round_trippers.go:469] Request Headers:
	I0915 07:13:59.754710 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:13:59.754716 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:13:59.793646 2584312 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0915 07:13:59.795741 2584312 pod_ready.go:103] pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:00.208237 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:00.208259 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:00.208268 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:00.208273 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:00.281126 2584312 round_trippers.go:574] Response Status: 200 OK in 72 milliseconds
	I0915 07:14:00.281907 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:00.281920 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:00.281929 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:00.281933 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:00.333576 2584312 round_trippers.go:574] Response Status: 200 OK in 51 milliseconds
	I0915 07:14:00.708496 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:00.708518 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:00.708528 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:00.708533 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:00.717068 2584312 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 07:14:00.718383 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:00.718404 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:00.718414 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:00.718417 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:00.726265 2584312 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 07:14:01.208793 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:01.208828 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:01.208839 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:01.208843 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:01.212081 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:01.213034 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:01.213054 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:01.213066 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:01.213071 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:01.215941 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:01.708867 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:01.708888 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:01.708896 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:01.708900 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:01.714072 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:14:01.714875 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:01.714890 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:01.714900 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:01.714906 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:01.717451 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:02.207868 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:02.207891 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:02.207900 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:02.207905 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:02.212857 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:14:02.213786 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:02.213844 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:02.213874 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:02.213899 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:02.222104 2584312 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 07:14:02.223282 2584312 pod_ready.go:103] pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:02.708122 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:02.708141 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:02.708151 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:02.708156 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:02.712024 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:02.713191 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:02.713212 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:02.713222 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:02.713227 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:02.715754 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:03.208333 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:03.208356 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:03.208365 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:03.208371 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:03.211549 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:03.212381 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:03.212402 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:03.212412 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:03.212418 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:03.215133 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:03.708173 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:03.708193 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:03.708202 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:03.708206 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:03.713821 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:14:03.715314 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:03.715338 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:03.715348 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:03.715357 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:03.721271 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:14:04.208748 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:04.208770 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:04.208779 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:04.208784 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:04.211880 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:04.212760 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:04.212781 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:04.212791 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:04.212797 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:04.215402 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:04.708522 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:04.708574 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:04.708585 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:04.708591 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:04.711724 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:04.712474 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:04.712485 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:04.712492 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:04.712496 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:04.715137 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:04.715714 2584312 pod_ready.go:103] pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:05.207890 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:05.207912 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:05.207921 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:05.207925 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:05.211796 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:05.212779 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:05.212813 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:05.212823 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:05.212827 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:05.216703 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:05.707937 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:05.707960 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:05.707970 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:05.707975 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:05.711587 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:05.712478 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:05.712497 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:05.712508 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:05.712512 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:05.715379 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:06.208560 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:06.208585 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:06.208596 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:06.208601 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:06.211419 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:06.212542 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:06.212565 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:06.212574 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:06.212580 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:06.215250 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:06.707920 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:06.707946 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:06.707956 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:06.707967 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:06.711029 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:06.711845 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:06.711858 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:06.711867 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:06.711870 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:06.714562 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:07.208718 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:07.208737 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:07.208746 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:07.208749 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:07.212988 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:14:07.213917 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:07.213937 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:07.213947 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:07.213951 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:07.216831 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:07.217443 2584312 pod_ready.go:103] pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:07.707936 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:07.707962 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:07.707972 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:07.707976 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:07.711073 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:07.712025 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:07.712049 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:07.712060 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:07.712064 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:07.715006 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:08.208767 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:08.208789 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:08.208798 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:08.208829 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:08.211783 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:08.212655 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:08.212673 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:08.212682 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:08.212688 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:08.215629 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:08.707812 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:08.707838 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:08.707848 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:08.707853 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:08.710988 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:08.712183 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:08.712242 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:08.712273 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:08.712303 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:08.715256 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:09.208847 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:09.208868 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:09.208878 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:09.208882 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:09.214752 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:14:09.215872 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:09.215894 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:09.215904 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:09.215909 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:09.218534 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:09.219158 2584312 pod_ready.go:103] pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:09.707894 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:09.707914 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:09.707922 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:09.707927 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:09.710980 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:09.711819 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:09.711836 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:09.711846 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:09.711852 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:09.714577 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:10.208735 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:10.208764 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:10.208775 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:10.208782 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:10.211799 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:10.213028 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:10.213104 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:10.213123 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:10.213130 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:10.215745 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:10.707933 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:14:10.707957 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:10.707967 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:10.707971 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:10.710859 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:10.711731 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:10.711750 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:10.711760 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:10.711765 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:10.714536 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:10.715276 2584312 pod_ready.go:93] pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:14:10.715300 2584312 pod_ready.go:82] duration metric: took 15.507641597s for pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
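The alternating pod/node GETs above, spaced roughly 500ms apart, are the readiness poll behind the pod_ready.go messages: the loop re-fetches the pod until its Ready condition turns True, then records the duration metric. A minimal sketch of that kind of loop against a plain client-go clientset; the helper names and the 500ms/6m0s cadence are illustrative assumptions, not minikube's actual pod_ready implementation:

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady mirrors the check behind `has status "Ready":"True"`.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls until the pod reports Ready, as the log does
	// for kube-apiserver-ha-985632-m02 over ~15.5s.
	func waitPodReady(client kubernetes.Interface, ns, name string) error {
		return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return isPodReady(pod), nil
		})
	}
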
	I0915 07:14:10.715312 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-985632-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:10.715388 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m03
	I0915 07:14:10.715399 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:10.715407 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:10.715412 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:10.718248 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:10.719137 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m03
	I0915 07:14:10.719159 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:10.719168 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:10.719173 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:10.721799 2584312 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0915 07:14:10.721987 2584312 pod_ready.go:98] node "ha-985632-m03" hosting pod "kube-apiserver-ha-985632-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-985632-m03": nodes "ha-985632-m03" not found
	I0915 07:14:10.722006 2584312 pod_ready.go:82] duration metric: took 6.687099ms for pod "kube-apiserver-ha-985632-m03" in "kube-system" namespace to be "Ready" ...
	E0915 07:14:10.722017 2584312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-985632-m03" hosting pod "kube-apiserver-ha-985632-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-985632-m03": nodes "ha-985632-m03" not found
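The 404 on /nodes/ha-985632-m03 and the WaitExtra error above show the short-circuit path: the node no longer exists at this point in the run, so the wait for its static pod is skipped immediately instead of being retried for the full 6m0s. client-go surfaces this as a typed not-found API error, roughly as in this hypothetical helper (nodeGone is illustrative, not minikube's code):

	package sketch

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeGone reports whether the node hosting a pod has been deleted,
	// matching the `nodes "ha-985632-m03" not found` skip logged above.
	func nodeGone(client kubernetes.Interface, name string) (bool, error) {
		_, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // stop waiting for pods on this node
		}
		return false, err
	}
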
	I0915 07:14:10.722025 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:10.722101 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:10.722109 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:10.722118 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:10.722122 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:10.725074 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:10.725970 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:10.725991 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:10.726000 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:10.726006 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:10.728532 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:11.222750 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:11.222773 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:11.222782 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:11.222788 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:11.227927 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:14:11.228837 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:11.228859 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:11.228869 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:11.228874 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:11.239907 2584312 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0915 07:14:11.722870 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:11.722896 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:11.722906 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:11.722910 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:11.725895 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:11.726892 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:11.726914 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:11.726924 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:11.726928 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:11.729700 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:12.222283 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:12.222307 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:12.222317 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:12.222322 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:12.225228 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:12.226185 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:12.226207 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:12.226217 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:12.226222 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:12.228939 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:12.722899 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:12.722926 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:12.722936 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:12.722942 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:12.725888 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:12.726698 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:12.726719 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:12.726731 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:12.726736 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:12.729404 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:12.730492 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:13.222205 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:13.222229 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:13.222240 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:13.222247 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:13.225462 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:13.226240 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:13.226293 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:13.226363 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:13.226373 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:13.229128 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:13.722461 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:13.722487 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:13.722497 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:13.722503 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:13.725624 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:13.726662 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:13.726684 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:13.726693 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:13.726699 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:13.729443 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:14.223202 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:14.223227 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:14.223237 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:14.223244 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:14.226782 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:14.227851 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:14.227872 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:14.227881 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:14.227885 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:14.230605 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:14.722788 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:14.722816 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:14.722826 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:14.722832 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:14.726186 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:14.727350 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:14.727375 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:14.727386 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:14.727390 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:14.730393 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:14.731267 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:15.222579 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:15.222645 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:15.222663 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:15.222667 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:15.226058 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:15.226872 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:15.226891 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:15.226902 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:15.226906 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:15.231026 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:14:15.723155 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:15.723182 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:15.723193 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:15.723199 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:15.726184 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:15.727471 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:15.727494 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:15.727505 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:15.727509 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:15.730869 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:16.223212 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:16.223240 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:16.223251 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:16.223255 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:16.226291 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:16.227228 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:16.227248 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:16.227258 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:16.227265 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:16.230955 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:16.722921 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:16.722947 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:16.722957 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:16.722963 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:16.726079 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:16.726847 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:16.726867 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:16.726878 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:16.726883 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:16.729624 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:16.736863 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:17.223187 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:17.223211 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:17.223220 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:17.223224 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:17.226254 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:17.227065 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:17.227085 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:17.227094 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:17.227102 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:17.229837 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:17.722875 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:17.722901 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:17.722921 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:17.722927 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:17.726050 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:17.726992 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:17.727016 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:17.727026 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:17.727031 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:17.730023 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:18.223065 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:18.223093 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:18.223102 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:18.223107 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:18.226103 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:18.227213 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:18.227235 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:18.227245 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:18.227250 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:18.229973 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:18.722333 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:18.722358 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:18.722368 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:18.722373 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:18.725898 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:18.727096 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:18.727123 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:18.727133 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:18.727137 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:18.729772 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:19.223202 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:19.223226 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:19.223237 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:19.223241 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:19.226525 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:19.227376 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:19.227396 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:19.227406 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:19.227411 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:19.230597 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:19.231270 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:19.722475 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:19.722499 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:19.722509 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:19.722513 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:19.725458 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:19.726225 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:19.726244 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:19.726254 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:19.726258 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:19.728833 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:20.222315 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:20.222405 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:20.222423 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:20.222434 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:20.225821 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:20.227352 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:20.227422 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:20.227460 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:20.227487 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:20.231367 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:20.722259 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:20.722281 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:20.722291 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:20.722294 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:20.726056 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:20.727155 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:20.727182 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:20.727192 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:20.727198 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:20.729983 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:21.222212 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:21.222239 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:21.222289 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:21.222294 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:21.225344 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:21.226169 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:21.226192 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:21.226203 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:21.226208 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:21.229002 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:21.723013 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:21.723039 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:21.723049 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:21.723056 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:21.726106 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:21.726966 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:21.726989 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:21.726999 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:21.727005 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:21.730031 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:21.730724 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:22.222234 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:22.222259 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:22.222270 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:22.222277 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:22.225507 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:22.226441 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:22.226465 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:22.226475 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:22.226482 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:22.229246 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:22.722314 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:22.722341 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:22.722351 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:22.722356 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:22.725362 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:22.726368 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:22.726390 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:22.726402 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:22.726408 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:22.728948 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:23.222289 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:23.222310 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:23.222319 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:23.222323 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:23.225293 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:23.226409 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:23.226431 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:23.226441 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:23.226446 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:23.229026 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:23.722267 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:23.722293 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:23.722303 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:23.722309 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:23.725612 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:23.726626 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:23.726647 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:23.726658 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:23.726664 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:23.729872 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:24.222224 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:24.222249 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:24.222259 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:24.222263 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:24.225294 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:24.226112 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:24.226128 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:24.226137 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:24.226141 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:24.228788 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:24.229575 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:24.722753 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:24.722777 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:24.722788 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:24.722794 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:24.725910 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:24.726820 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:24.726838 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:24.726847 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:24.726851 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:24.730317 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:25.222481 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:25.222506 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:25.222513 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:25.222518 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:25.225621 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:25.226427 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:25.226450 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:25.226460 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:25.226467 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:25.229528 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:25.722224 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:25.722250 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:25.722260 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:25.722264 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:25.725617 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:25.726562 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:25.726587 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:25.726600 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:25.726606 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:25.729630 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:26.222319 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:26.222344 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:26.222355 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:26.222360 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:26.225778 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:26.226794 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:26.226816 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:26.226826 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:26.226834 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:26.230709 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:26.231415 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:26.723063 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:26.723086 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:26.723096 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:26.723100 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:26.726124 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:26.727105 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:26.727131 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:26.727140 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:26.727144 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:26.731277 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:14:27.223301 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:27.223323 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:27.223332 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:27.223338 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:27.226322 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:27.227025 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:27.227044 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:27.227054 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:27.227058 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:27.229687 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:27.723009 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:27.723034 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:27.723045 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:27.723049 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:27.725935 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:27.726986 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:27.727043 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:27.727058 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:27.727065 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:27.730259 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:28.222744 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:28.222768 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:28.222778 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:28.222784 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:28.225933 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:28.226709 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:28.226730 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:28.226738 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:28.226742 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:28.229520 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:28.722717 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:28.722751 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:28.722762 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:28.722769 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:28.726310 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:28.727171 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:28.727193 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:28.727203 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:28.727208 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:28.730483 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:28.731091 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:29.222246 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:29.222321 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:29.222346 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:29.222363 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:29.226210 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:29.227885 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:29.227966 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:29.228048 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:29.228131 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:29.234544 2584312 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:14:29.723228 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:29.723247 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:29.723257 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:29.723263 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:29.727449 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:14:29.728531 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:29.728554 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:29.728565 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:29.728570 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:29.736848 2584312 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 07:14:30.222311 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:30.222336 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:30.222346 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:30.222352 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:30.225706 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:30.226799 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:30.226824 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:30.226835 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:30.226841 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:30.229820 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:30.723207 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:30.723231 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:30.723241 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:30.723245 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:30.726532 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:30.727324 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:30.727344 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:30.727354 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:30.727361 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:30.730211 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:30.731335 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:31.222303 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:31.222385 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:31.222395 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:31.222400 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:31.225416 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:31.226307 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:31.226338 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:31.226349 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:31.226353 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:31.229152 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:31.722439 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:31.722462 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:31.722473 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:31.722479 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:31.725307 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:31.726225 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:31.726246 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:31.726255 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:31.726261 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:31.728971 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:32.222301 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:32.222321 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:32.222331 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:32.222336 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:32.225417 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:32.226326 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:32.226348 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:32.226359 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:32.226367 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:32.230150 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:32.722877 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:32.722896 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:32.722914 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:32.722920 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:32.727727 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:14:32.729051 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:32.729119 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:32.729143 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:32.729166 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:32.733104 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:32.734364 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:33.222280 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:33.222302 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:33.222312 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:33.222317 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:33.225573 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:33.226878 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:33.226963 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:33.226989 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:33.227011 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:33.230724 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:33.722345 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:33.722370 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:33.722382 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:33.722392 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:33.727315 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:14:33.729347 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:33.729369 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:33.729378 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:33.729382 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:33.735314 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:14:34.222882 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:34.222916 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:34.222926 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:34.222932 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:34.240073 2584312 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0915 07:14:34.242672 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:34.242692 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:34.242701 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:34.242707 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:34.250689 2584312 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 07:14:34.722873 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:34.722902 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:34.722912 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:34.722918 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:34.726472 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:34.727750 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:34.727771 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:34.727782 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:34.727794 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:34.731727 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:35.222277 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:35.222305 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:35.222315 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:35.222321 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:35.225524 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:35.226618 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:35.226638 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:35.226649 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:35.226655 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:35.229568 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:35.230251 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:35.722908 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:35.722936 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:35.722946 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:35.722956 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:35.726016 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:35.727021 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:35.727044 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:35.727053 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:35.727060 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:35.729950 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:36.223050 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:36.223089 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:36.223100 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:36.223105 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:36.226558 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:36.227510 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:36.227530 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:36.227539 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:36.227543 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:36.230920 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:36.722896 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:36.722919 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:36.722928 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:36.722934 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:36.725882 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:36.726767 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:36.726786 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:36.726795 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:36.726800 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:36.729488 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:37.222774 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:37.222796 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:37.222805 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:37.222810 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:37.225782 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:37.226957 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:37.226980 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:37.226991 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:37.226997 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:37.229608 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:37.230455 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:37.723326 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:37.723354 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:37.723364 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:37.723368 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:37.726681 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:37.727530 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:37.727552 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:37.727561 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:37.727565 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:37.730233 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:38.222279 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:38.222318 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:38.222328 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:38.222333 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:38.225514 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:38.226632 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:38.226652 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:38.226661 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:38.226666 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:38.229645 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:38.723010 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:38.723032 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:38.723042 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:38.723048 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:38.730205 2584312 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 07:14:38.731667 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:38.731691 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:38.731702 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:38.731707 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:38.735962 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:14:39.223047 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:39.223075 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:39.223089 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:39.223094 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:39.226520 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:39.227502 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:39.227523 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:39.227533 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:39.227537 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:39.230273 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:39.231108 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:39.722592 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:39.722615 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:39.722624 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:39.722631 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:39.725431 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:39.726178 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:39.726199 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:39.726209 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:39.726213 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:39.728788 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:40.222949 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:40.222975 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:40.222984 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:40.222989 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:40.226296 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:40.227576 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:40.227603 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:40.227613 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:40.227624 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:40.230547 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:40.722919 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:40.722942 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:40.722952 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:40.722959 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:40.725993 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:40.726859 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:40.726876 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:40.726886 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:40.726891 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:40.729433 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:41.222221 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:41.222249 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:41.222260 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:41.222266 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:41.225549 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:41.226322 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:41.226342 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:41.226351 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:41.226355 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:41.228940 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:41.722282 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:41.722309 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:41.722319 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:41.722324 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:41.725254 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:41.726147 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:41.726166 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:41.726176 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:41.726180 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:41.728847 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:41.729495 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:42.224131 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:42.224166 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:42.224185 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:42.224192 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:42.227664 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:42.228670 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:42.228695 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:42.228706 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:42.228711 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:42.232030 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:42.722651 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:42.722675 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:42.722686 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:42.722691 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:42.725487 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:42.726505 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:42.726525 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:42.726535 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:42.726539 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:42.729205 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:43.222859 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:43.222894 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:43.222909 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:43.222914 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:43.225866 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:43.227014 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:43.227036 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:43.227045 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:43.227050 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:43.229705 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:43.722861 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:43.722885 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:43.722942 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:43.722952 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:43.726041 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:43.726897 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:43.726920 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:43.726930 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:43.726934 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:43.730130 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:43.730773 2584312 pod_ready.go:103] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:14:44.222453 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:44.222480 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:44.222489 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:44.222493 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:44.225987 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:44.227113 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:44.227132 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:44.227144 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:44.227151 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:44.230125 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:44.722877 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:44.722968 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:44.722993 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:44.723017 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:44.726020 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:44.727273 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:44.727335 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:44.727357 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:44.727378 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:44.730272 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:45.223239 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:14:45.223270 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:45.223282 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:45.223289 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:45.236256 2584312 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0915 07:14:45.238747 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:45.238777 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:45.238788 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:45.238794 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:45.261598 2584312 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0915 07:14:45.263435 2584312 pod_ready.go:93] pod "kube-controller-manager-ha-985632" in "kube-system" namespace has status "Ready":"True"
	I0915 07:14:45.263471 2584312 pod_ready.go:82] duration metric: took 34.54142912s for pod "kube-controller-manager-ha-985632" in "kube-system" namespace to be "Ready" ...
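The several hundred alternating GET pairs above are one wait loop: pod_ready re-fetches the pod and its hosting node roughly every 500 ms until the pod's Ready condition turns True, which here took 34.5 s while the restarted kube-controller-manager came back up. A minimal client-go sketch of that polling pattern follows; names, package, and timings are illustrative, not minikube's actual code.

	// podwait sketches the poll loop behind the repeated
	// "GET .../pods/<name>" + "GET .../nodes/<node>" pairs in the log.
	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the pod's Ready condition is True, or times out.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				// Second GET per iteration: the hosting node is fetched too,
				// since a pod on a NotReady node is skipped rather than awaited.
				if _, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); err != nil {
					return false, err
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}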
	I0915 07:14:45.263487 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:45.263566 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632-m02
	I0915 07:14:45.263579 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:45.263587 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:45.263591 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:45.281534 2584312 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0915 07:14:45.285532 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:45.285575 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:45.285587 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:45.285592 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:45.292460 2584312 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:14:45.295533 2584312 pod_ready.go:93] pod "kube-controller-manager-ha-985632-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:14:45.295563 2584312 pod_ready.go:82] duration metric: took 32.068094ms for pod "kube-controller-manager-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:45.295576 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-985632-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:45.295673 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632-m03
	I0915 07:14:45.295685 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:45.295694 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:45.295698 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:45.306112 2584312 round_trippers.go:574] Response Status: 404 Not Found in 10 milliseconds
	I0915 07:14:45.306589 2584312 pod_ready.go:98] error getting pod "kube-controller-manager-ha-985632-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-985632-m03" not found
	I0915 07:14:45.306658 2584312 pod_ready.go:82] duration metric: took 11.072461ms for pod "kube-controller-manager-ha-985632-m03" in "kube-system" namespace to be "Ready" ...
	E0915 07:14:45.306689 2584312 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-985632-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-985632-m03" not found
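The 404 above is expected rather than a failure: the m03 control plane apparently no longer exists at this point in the serial run (its node also returns 404 just below), so the wait treats NotFound as "skip" and moves on. A hedged sketch of that branch using client-go's standard error predicate; the surrounding function and names are illustrative.

	package podwait

	import (
		"context"
		"log"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podExistsOrSkip returns (false, nil) for a deleted pod so the caller
	// can log "(skipping!)" and continue, mirroring pod_ready.go:98/:67 above.
	func podExistsOrSkip(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			log.Printf("error getting pod %q in %q namespace (skipping!): %v", name, ns, err)
			return false, nil
		}
		if err != nil {
			return false, err // any other API error still fails the wait
		}
		return true, nil
	}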
	I0915 07:14:45.306728 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2kqsm" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:45.306856 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2kqsm
	I0915 07:14:45.306891 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:45.306932 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:45.306972 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:45.315237 2584312 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 07:14:45.316437 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m03
	I0915 07:14:45.316516 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:45.316544 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:45.316566 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:45.319663 2584312 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0915 07:14:45.320096 2584312 pod_ready.go:98] node "ha-985632-m03" hosting pod "kube-proxy-2kqsm" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-985632-m03": nodes "ha-985632-m03" not found
	I0915 07:14:45.320130 2584312 pod_ready.go:82] duration metric: took 13.369923ms for pod "kube-proxy-2kqsm" in "kube-system" namespace to be "Ready" ...
	E0915 07:14:45.320143 2584312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-985632-m03" hosting pod "kube-proxy-2kqsm" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-985632-m03": nodes "ha-985632-m03" not found
	I0915 07:14:45.320151 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5fsgj" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:45.320241 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5fsgj
	I0915 07:14:45.320254 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:45.320265 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:45.320280 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:45.325004 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:14:45.326258 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:45.326293 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:45.326303 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:45.326306 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:45.332009 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:14:45.333558 2584312 pod_ready.go:93] pod "kube-proxy-5fsgj" in "kube-system" namespace has status "Ready":"True"
	I0915 07:14:45.333586 2584312 pod_ready.go:82] duration metric: took 13.424149ms for pod "kube-proxy-5fsgj" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:45.333599 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hwpmv" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:45.333695 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hwpmv
	I0915 07:14:45.333715 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:45.333724 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:45.333729 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:45.549387 2584312 round_trippers.go:574] Response Status:  in 215 milliseconds
	I0915 07:14:46.549708 2584312 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hwpmv
	I0915 07:14:46.549769 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hwpmv
	I0915 07:14:46.549776 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:46.549791 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:46.549797 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:49.228771 2584312 round_trippers.go:574] Response Status: 200 OK in 2678 milliseconds
	I0915 07:14:49.246513 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:49.246545 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:49.246559 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:49.246563 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:49.362138 2584312 round_trippers.go:574] Response Status: 200 OK in 115 milliseconds
	I0915 07:14:49.363582 2584312 pod_ready.go:93] pod "kube-proxy-hwpmv" in "kube-system" namespace has status "Ready":"True"
	I0915 07:14:49.363607 2584312 pod_ready.go:82] duration metric: took 4.029999119s for pod "kube-proxy-hwpmv" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:49.363618 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxkq4" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:49.363688 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxkq4
	I0915 07:14:49.363693 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:49.363701 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:49.363705 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:49.378679 2584312 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0915 07:14:49.379503 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:14:49.379555 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:49.379584 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:49.379606 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:49.382938 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:49.383679 2584312 pod_ready.go:98] node "ha-985632-m04" hosting pod "kube-proxy-kxkq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-985632-m04" has status "Ready":"Unknown"
	I0915 07:14:49.383737 2584312 pod_ready.go:82] duration metric: took 20.110591ms for pod "kube-proxy-kxkq4" in "kube-system" namespace to be "Ready" ...
	E0915 07:14:49.383761 2584312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-985632-m04" hosting pod "kube-proxy-kxkq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-985632-m04" has status "Ready":"Unknown"
	I0915 07:14:49.383786 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:49.383902 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-985632
	I0915 07:14:49.383926 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:49.383969 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:49.383989 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:49.386962 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:49.388148 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:14:49.388197 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:49.388237 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:49.388262 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:49.391585 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:49.392218 2584312 pod_ready.go:93] pod "kube-scheduler-ha-985632" in "kube-system" namespace has status "Ready":"True"
	I0915 07:14:49.392267 2584312 pod_ready.go:82] duration metric: took 8.441969ms for pod "kube-scheduler-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:49.392294 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:49.392393 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-985632-m02
	I0915 07:14:49.392426 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:49.392448 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:49.392469 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:49.395403 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:14:49.396093 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:14:49.396133 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:49.396173 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:49.396197 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:49.399378 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:14:49.400617 2584312 pod_ready.go:93] pod "kube-scheduler-ha-985632-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:14:49.400673 2584312 pod_ready.go:82] duration metric: took 8.358418ms for pod "kube-scheduler-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:49.400714 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-985632-m03" in "kube-system" namespace to be "Ready" ...
	I0915 07:14:49.400835 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-985632-m03
	I0915 07:14:49.400875 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:49.400903 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:49.400924 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:49.403757 2584312 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0915 07:14:49.404151 2584312 pod_ready.go:98] error getting pod "kube-scheduler-ha-985632-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-985632-m03" not found
	I0915 07:14:49.404193 2584312 pod_ready.go:82] duration metric: took 3.452954ms for pod "kube-scheduler-ha-985632-m03" in "kube-system" namespace to be "Ready" ...
	E0915 07:14:49.404231 2584312 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-985632-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-985632-m03" not found
	I0915 07:14:49.404260 2584312 pod_ready.go:39] duration metric: took 56.852118318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
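	
	The 56.85s wait recorded above is minikube's pod_ready loop: for each system-critical pod it GETs the pod object and, when the pod reports Ready, also GETs the hosting node, skipping pods whose node is missing or not Ready (the m03/m04 skips logged earlier). A minimal sketch of a comparable readiness check with client-go follows; the kubeconfig path and pod name are illustrative placeholders, and this is not minikube's actual pod_ready implementation:
	
	    // Poll a pod until its Ready condition reports True, or time out.
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "time"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func podReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }
	
	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        client := kubernetes.NewForConfigOrDie(cfg)
	        deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log above
	        for time.Now().Before(deadline) {
	            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-5fsgj", metav1.GetOptions{})
	            if err == nil && podReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for pod")
	    }
	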
	I0915 07:14:49.404292 2584312 api_server.go:52] waiting for apiserver process to appear ...
	I0915 07:14:49.404353 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 07:14:49.404444 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 07:14:49.461342 2584312 cri.go:89] found id: "5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a"
	I0915 07:14:49.461364 2584312 cri.go:89] found id: "94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c"
	I0915 07:14:49.461369 2584312 cri.go:89] found id: ""
	I0915 07:14:49.461376 2584312 logs.go:276] 2 containers: [5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a 94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c]
	I0915 07:14:49.461432 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:49.465233 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:49.468855 2584312 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 07:14:49.468943 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 07:14:49.511370 2584312 cri.go:89] found id: "acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead"
	I0915 07:14:49.511393 2584312 cri.go:89] found id: "60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585"
	I0915 07:14:49.511398 2584312 cri.go:89] found id: ""
	I0915 07:14:49.511405 2584312 logs.go:276] 2 containers: [acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead 60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585]
	I0915 07:14:49.511463 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:49.515025 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:49.518399 2584312 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 07:14:49.518473 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 07:14:49.559146 2584312 cri.go:89] found id: ""
	I0915 07:14:49.559177 2584312 logs.go:276] 0 containers: []
	W0915 07:14:49.559187 2584312 logs.go:278] No container was found matching "coredns"
	I0915 07:14:49.559222 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 07:14:49.559301 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 07:14:49.599310 2584312 cri.go:89] found id: "db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4"
	I0915 07:14:49.599333 2584312 cri.go:89] found id: "6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e"
	I0915 07:14:49.599338 2584312 cri.go:89] found id: ""
	I0915 07:14:49.599345 2584312 logs.go:276] 2 containers: [db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4 6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e]
	I0915 07:14:49.599402 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:49.603736 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:49.607381 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 07:14:49.607510 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 07:14:49.644602 2584312 cri.go:89] found id: "3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9"
	I0915 07:14:49.644664 2584312 cri.go:89] found id: ""
	I0915 07:14:49.644688 2584312 logs.go:276] 1 containers: [3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9]
	I0915 07:14:49.644761 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:49.649234 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 07:14:49.649392 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 07:14:49.693128 2584312 cri.go:89] found id: "6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657"
	I0915 07:14:49.693187 2584312 cri.go:89] found id: "863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2"
	I0915 07:14:49.693192 2584312 cri.go:89] found id: ""
	I0915 07:14:49.693199 2584312 logs.go:276] 2 containers: [6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657 863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2]
	I0915 07:14:49.693319 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:49.697791 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:49.701652 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 07:14:49.701727 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 07:14:49.745119 2584312 cri.go:89] found id: "46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248"
	I0915 07:14:49.745143 2584312 cri.go:89] found id: ""
	I0915 07:14:49.745151 2584312 logs.go:276] 1 containers: [46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248]
	I0915 07:14:49.745209 2584312 ssh_runner.go:195] Run: which crictl
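	
	The cri.go/logs.go block above discovers container IDs per component by running "sudo crictl ps -a --quiet --name=<component>", which prints one 64-hex ID per line for containers in any state (two IDs appear when both an old and a restarted instance of a control-plane container exist). A small illustrative sketch of that discovery step, run locally rather than over SSH as minikube does:
	
	    // List all container IDs (any state) whose name matches, via crictl.
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )
	
	    func main() {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	        if err != nil {
	            panic(err)
	        }
	        ids := strings.Fields(string(out)) // e.g. 2 containers after a control-plane restart
	        fmt.Printf("%d containers: %v\n", len(ids), ids)
	    }
	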
	I0915 07:14:49.748864 2584312 logs.go:123] Gathering logs for kube-scheduler [6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e] ...
	I0915 07:14:49.748892 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e"
	I0915 07:14:49.788546 2584312 logs.go:123] Gathering logs for kubelet ...
	I0915 07:14:49.788618 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 07:14:49.865546 2584312 logs.go:123] Gathering logs for dmesg ...
	I0915 07:14:49.865585 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 07:14:49.883842 2584312 logs.go:123] Gathering logs for kube-apiserver [94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c] ...
	I0915 07:14:49.883873 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c"
	I0915 07:14:49.926257 2584312 logs.go:123] Gathering logs for kube-controller-manager [863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2] ...
	I0915 07:14:49.926286 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2"
	I0915 07:14:49.980740 2584312 logs.go:123] Gathering logs for describe nodes ...
	I0915 07:14:49.980773 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 07:14:50.763167 2584312 logs.go:123] Gathering logs for kube-apiserver [5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a] ...
	I0915 07:14:50.763203 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a"
	I0915 07:14:50.817896 2584312 logs.go:123] Gathering logs for kube-scheduler [db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4] ...
	I0915 07:14:50.817930 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4"
	I0915 07:14:50.858484 2584312 logs.go:123] Gathering logs for kindnet [46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248] ...
	I0915 07:14:50.858518 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248"
	I0915 07:14:50.910935 2584312 logs.go:123] Gathering logs for CRI-O ...
	I0915 07:14:50.910965 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 07:14:50.985801 2584312 logs.go:123] Gathering logs for etcd [acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead] ...
	I0915 07:14:50.985842 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead"
	I0915 07:14:51.045843 2584312 logs.go:123] Gathering logs for etcd [60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585] ...
	I0915 07:14:51.045925 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585"
	I0915 07:14:51.105786 2584312 logs.go:123] Gathering logs for kube-controller-manager [6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657] ...
	I0915 07:14:51.105826 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657"
	I0915 07:14:51.171417 2584312 logs.go:123] Gathering logs for kube-proxy [3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9] ...
	I0915 07:14:51.171453 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9"
	I0915 07:14:51.214132 2584312 logs.go:123] Gathering logs for container status ...
	I0915 07:14:51.214160 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
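	
	Each discovered ID is then drained with "crictl logs --tail 400 <id>", alongside journalctl for the kubelet and CRI-O units and a filtered dmesg, all via "/bin/bash -c" as logged above. A minimal sketch of that gathering loop (commands copied from the log lines; the SSH hop minikube uses is omitted, and the container ID shown is the one from this run):
	
	    // Run each log collector and print its output keyed by name.
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    func main() {
	        collectors := map[string]string{
	            "kubelet":        "sudo journalctl -u kubelet -n 400",
	            "CRI-O":          "sudo journalctl -u crio -n 400",
	            "dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	            "kube-apiserver": "sudo /usr/bin/crictl logs --tail 400 5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a",
	        }
	        for name, cmd := range collectors {
	            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	            if err != nil {
	                fmt.Printf("%s: %v\n", name, err)
	                continue
	            }
	            fmt.Printf("=== %s ===\n%s\n", name, out)
	        }
	    }
	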
	I0915 07:14:53.762929 2584312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:14:53.776069 2584312 api_server.go:72] duration metric: took 1m9.969629393s to wait for apiserver process to appear ...
	I0915 07:14:53.776094 2584312 api_server.go:88] waiting for apiserver healthz status ...
	I0915 07:14:53.776159 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 07:14:53.776233 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 07:14:53.821322 2584312 cri.go:89] found id: "5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a"
	I0915 07:14:53.821402 2584312 cri.go:89] found id: "94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c"
	I0915 07:14:53.821414 2584312 cri.go:89] found id: ""
	I0915 07:14:53.821428 2584312 logs.go:276] 2 containers: [5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a 94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c]
	I0915 07:14:53.821491 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:53.825593 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:53.829291 2584312 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 07:14:53.829409 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 07:14:53.871593 2584312 cri.go:89] found id: "acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead"
	I0915 07:14:53.871632 2584312 cri.go:89] found id: "60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585"
	I0915 07:14:53.871638 2584312 cri.go:89] found id: ""
	I0915 07:14:53.871646 2584312 logs.go:276] 2 containers: [acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead 60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585]
	I0915 07:14:53.871714 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:53.875563 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:53.879422 2584312 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 07:14:53.879507 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 07:14:53.926018 2584312 cri.go:89] found id: ""
	I0915 07:14:53.926041 2584312 logs.go:276] 0 containers: []
	W0915 07:14:53.926052 2584312 logs.go:278] No container was found matching "coredns"
	I0915 07:14:53.926059 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 07:14:53.926119 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 07:14:53.964099 2584312 cri.go:89] found id: "db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4"
	I0915 07:14:53.964120 2584312 cri.go:89] found id: "6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e"
	I0915 07:14:53.964125 2584312 cri.go:89] found id: ""
	I0915 07:14:53.964133 2584312 logs.go:276] 2 containers: [db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4 6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e]
	I0915 07:14:53.964196 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:53.968012 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:53.971873 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 07:14:53.971974 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 07:14:54.020385 2584312 cri.go:89] found id: "3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9"
	I0915 07:14:54.020421 2584312 cri.go:89] found id: ""
	I0915 07:14:54.020429 2584312 logs.go:276] 1 containers: [3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9]
	I0915 07:14:54.020536 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:54.025798 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 07:14:54.025915 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 07:14:54.074326 2584312 cri.go:89] found id: "6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657"
	I0915 07:14:54.074349 2584312 cri.go:89] found id: "863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2"
	I0915 07:14:54.074355 2584312 cri.go:89] found id: ""
	I0915 07:14:54.074362 2584312 logs.go:276] 2 containers: [6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657 863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2]
	I0915 07:14:54.074441 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:54.078581 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:54.082572 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 07:14:54.082688 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 07:14:54.121437 2584312 cri.go:89] found id: "46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248"
	I0915 07:14:54.121469 2584312 cri.go:89] found id: ""
	I0915 07:14:54.121484 2584312 logs.go:276] 1 containers: [46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248]
	I0915 07:14:54.121548 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:54.125501 2584312 logs.go:123] Gathering logs for kube-scheduler [db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4] ...
	I0915 07:14:54.125526 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4"
	I0915 07:14:54.178541 2584312 logs.go:123] Gathering logs for dmesg ...
	I0915 07:14:54.178625 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 07:14:54.198007 2584312 logs.go:123] Gathering logs for describe nodes ...
	I0915 07:14:54.198037 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 07:14:54.464798 2584312 logs.go:123] Gathering logs for etcd [60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585] ...
	I0915 07:14:54.464883 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585"
	I0915 07:14:54.539854 2584312 logs.go:123] Gathering logs for kube-scheduler [6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e] ...
	I0915 07:14:54.539892 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e"
	I0915 07:14:54.581647 2584312 logs.go:123] Gathering logs for CRI-O ...
	I0915 07:14:54.581676 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 07:14:54.652192 2584312 logs.go:123] Gathering logs for container status ...
	I0915 07:14:54.652233 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 07:14:54.715788 2584312 logs.go:123] Gathering logs for etcd [acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead] ...
	I0915 07:14:54.715820 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead"
	I0915 07:14:54.782370 2584312 logs.go:123] Gathering logs for kube-controller-manager [863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2] ...
	I0915 07:14:54.782407 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2"
	I0915 07:14:54.821392 2584312 logs.go:123] Gathering logs for kube-apiserver [94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c] ...
	I0915 07:14:54.821421 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c"
	I0915 07:14:54.862307 2584312 logs.go:123] Gathering logs for kube-proxy [3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9] ...
	I0915 07:14:54.862345 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9"
	I0915 07:14:54.903088 2584312 logs.go:123] Gathering logs for kube-controller-manager [6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657] ...
	I0915 07:14:54.903118 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657"
	I0915 07:14:54.989023 2584312 logs.go:123] Gathering logs for kindnet [46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248] ...
	I0915 07:14:54.989054 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248"
	I0915 07:14:55.043198 2584312 logs.go:123] Gathering logs for kubelet ...
	I0915 07:14:55.043381 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 07:14:55.124310 2584312 logs.go:123] Gathering logs for kube-apiserver [5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a] ...
	I0915 07:14:55.124351 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a"
	I0915 07:14:57.681881 2584312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 07:14:57.691522 2584312 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 07:14:57.691602 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0915 07:14:57.691611 2584312 round_trippers.go:469] Request Headers:
	I0915 07:14:57.691621 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:14:57.691629 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:14:57.692612 2584312 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0915 07:14:57.692749 2584312 api_server.go:141] control plane version: v1.31.1
	I0915 07:14:57.692769 2584312 api_server.go:131] duration metric: took 3.916667496s to wait for apiserver health ...
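	
	After confirming the apiserver process with pgrep, minikube probes https://192.168.49.2:8443/healthz until it returns 200 with body "ok", then reads /version for the control-plane version, as the lines above show. A minimal sketch of that HTTP probe; certificate verification is skipped here purely for brevity, whereas the real check trusts the cluster CA:
	
	    // Probe the apiserver healthz endpoint and print status plus body.
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )
	
	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            // Sketch only: InsecureSkipVerify stands in for loading the cluster CA.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        resp, err := client.Get("https://192.168.49.2:8443/healthz")
	        if err != nil {
	            panic(err)
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	    }
	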
	I0915 07:14:57.692778 2584312 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 07:14:57.692799 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 07:14:57.692893 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 07:14:57.740715 2584312 cri.go:89] found id: "5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a"
	I0915 07:14:57.740740 2584312 cri.go:89] found id: "94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c"
	I0915 07:14:57.740745 2584312 cri.go:89] found id: ""
	I0915 07:14:57.740753 2584312 logs.go:276] 2 containers: [5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a 94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c]
	I0915 07:14:57.740842 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:57.744601 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:57.748561 2584312 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 07:14:57.748645 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 07:14:57.789275 2584312 cri.go:89] found id: "acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead"
	I0915 07:14:57.789310 2584312 cri.go:89] found id: "60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585"
	I0915 07:14:57.789320 2584312 cri.go:89] found id: ""
	I0915 07:14:57.789332 2584312 logs.go:276] 2 containers: [acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead 60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585]
	I0915 07:14:57.789536 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:57.793636 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:57.797139 2584312 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 07:14:57.797220 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 07:14:57.845106 2584312 cri.go:89] found id: ""
	I0915 07:14:57.845131 2584312 logs.go:276] 0 containers: []
	W0915 07:14:57.845149 2584312 logs.go:278] No container was found matching "coredns"
	I0915 07:14:57.845157 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 07:14:57.845218 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 07:14:57.884584 2584312 cri.go:89] found id: "db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4"
	I0915 07:14:57.884606 2584312 cri.go:89] found id: "6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e"
	I0915 07:14:57.884610 2584312 cri.go:89] found id: ""
	I0915 07:14:57.884617 2584312 logs.go:276] 2 containers: [db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4 6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e]
	I0915 07:14:57.884680 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:57.888283 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:57.891860 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 07:14:57.891941 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 07:14:57.930270 2584312 cri.go:89] found id: "3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9"
	I0915 07:14:57.930294 2584312 cri.go:89] found id: ""
	I0915 07:14:57.930302 2584312 logs.go:276] 1 containers: [3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9]
	I0915 07:14:57.930390 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:57.934592 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 07:14:57.934706 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 07:14:57.976365 2584312 cri.go:89] found id: "6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657"
	I0915 07:14:57.976401 2584312 cri.go:89] found id: "863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2"
	I0915 07:14:57.976406 2584312 cri.go:89] found id: ""
	I0915 07:14:57.976414 2584312 logs.go:276] 2 containers: [6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657 863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2]
	I0915 07:14:57.976480 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:57.980229 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:57.984009 2584312 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 07:14:57.984105 2584312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 07:14:58.029781 2584312 cri.go:89] found id: "46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248"
	I0915 07:14:58.029850 2584312 cri.go:89] found id: ""
	I0915 07:14:58.029873 2584312 logs.go:276] 1 containers: [46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248]
	I0915 07:14:58.029958 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:14:58.033815 2584312 logs.go:123] Gathering logs for etcd [60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585] ...
	I0915 07:14:58.033844 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60f03bb0269b05ed87c0ff07619122355380ecb98ea9dd54d7c55b66c6786585"
	I0915 07:14:58.097796 2584312 logs.go:123] Gathering logs for kube-scheduler [db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4] ...
	I0915 07:14:58.097835 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db0afb91b7c13a03614f9a93d47c27694378f0555af7f725f236456a0e80ada4"
	I0915 07:14:58.138664 2584312 logs.go:123] Gathering logs for kube-proxy [3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9] ...
	I0915 07:14:58.138694 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b06d7eafef8a38c39ba07435109393c3e6d5064c2247dbde5df8a8f662ad1c9"
	I0915 07:14:58.180973 2584312 logs.go:123] Gathering logs for kube-controller-manager [6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657] ...
	I0915 07:14:58.181006 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b430fdcbd74d6110db00ad590ee66c463c352a43331febfa45f2987c7737657"
	I0915 07:14:58.255740 2584312 logs.go:123] Gathering logs for CRI-O ...
	I0915 07:14:58.255782 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 07:14:58.335017 2584312 logs.go:123] Gathering logs for kubelet ...
	I0915 07:14:58.335054 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 07:14:58.419480 2584312 logs.go:123] Gathering logs for describe nodes ...
	I0915 07:14:58.419526 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 07:14:58.682585 2584312 logs.go:123] Gathering logs for kube-apiserver [94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c] ...
	I0915 07:14:58.682628 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94c38c0c0d9e6d4d86c977e9f5db7747da316b460663069f90513f3ba0e1825c"
	I0915 07:14:58.726435 2584312 logs.go:123] Gathering logs for container status ...
	I0915 07:14:58.726463 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 07:14:58.787951 2584312 logs.go:123] Gathering logs for kube-scheduler [6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e] ...
	I0915 07:14:58.787981 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f96efe7e62fbec2bc4f3b77f7660bb4b4b2b8b49cc53555a1af36c7d68bba8e"
	I0915 07:14:58.840598 2584312 logs.go:123] Gathering logs for kindnet [46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248] ...
	I0915 07:14:58.840627 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46690c372f7fe5ead779612e9b1948e43b78d6450c8f2faa39e5cf266fc00248"
	I0915 07:14:58.881276 2584312 logs.go:123] Gathering logs for kube-apiserver [5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a] ...
	I0915 07:14:58.881305 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5344cd779d74ca3419e0d2128ebdcc95aec2048aeddbf5e1625023f79e06e65a"
	I0915 07:14:58.951129 2584312 logs.go:123] Gathering logs for etcd [acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead] ...
	I0915 07:14:58.951163 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acd1fd00dbeb03af513a7c25b35570871882f0d965d6ba49dfef1c2b8b689ead"
	I0915 07:14:59.026071 2584312 logs.go:123] Gathering logs for kube-controller-manager [863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2] ...
	I0915 07:14:59.026108 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 863aea8223bc9c238968c78e3ce5f01971552405f957cfa8aec5f26094e765a2"
	I0915 07:14:59.080124 2584312 logs.go:123] Gathering logs for dmesg ...
	I0915 07:14:59.080199 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 07:15:01.598569 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0915 07:15:01.598597 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:01.598607 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:01.598610 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:01.604241 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:15:01.611630 2584312 system_pods.go:59] 19 kube-system pods found
	I0915 07:15:01.611676 2584312 system_pods.go:61] "coredns-7c65d6cfc9-fr4vw" [2c51f24e-62f4-4d54-ae17-07bd5906b0ff] Running
	I0915 07:15:01.611684 2584312 system_pods.go:61] "coredns-7c65d6cfc9-l2k54" [a101fed6-a598-4ae0-bd3d-405d69d55924] Running
	I0915 07:15:01.611690 2584312 system_pods.go:61] "etcd-ha-985632" [cb4271c2-e3ce-421e-b7f1-3897f92c617b] Running
	I0915 07:15:01.611694 2584312 system_pods.go:61] "etcd-ha-985632-m02" [2f474540-9140-4a89-a01b-985a3b523827] Running
	I0915 07:15:01.611699 2584312 system_pods.go:61] "kindnet-2f5fz" [499a9251-0388-4fdc-b520-366abcc00ad4] Running
	I0915 07:15:01.611704 2584312 system_pods.go:61] "kindnet-frm9q" [331f83ea-e1bf-47ba-ade0-850abb74ebdd] Running
	I0915 07:15:01.611708 2584312 system_pods.go:61] "kindnet-rcz7x" [19fa1dfb-9f39-46cf-8ba1-172183442b02] Running
	I0915 07:15:01.611716 2584312 system_pods.go:61] "kube-apiserver-ha-985632" [939491a1-5104-4698-b4cf-7c02d26a0abf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0915 07:15:01.611731 2584312 system_pods.go:61] "kube-apiserver-ha-985632-m02" [79b8e2ec-acc3-4283-822a-9e901089d1ad] Running
	I0915 07:15:01.611749 2584312 system_pods.go:61] "kube-controller-manager-ha-985632" [b005def9-272c-49c9-bbba-7fbae2ddbe0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0915 07:15:01.611767 2584312 system_pods.go:61] "kube-controller-manager-ha-985632-m02" [efc02497-f63b-4942-9516-baeab38141e9] Running
	I0915 07:15:01.611772 2584312 system_pods.go:61] "kube-proxy-5fsgj" [42c2fc5c-e87e-4c55-9d43-56d9ce8f9fba] Running
	I0915 07:15:01.611779 2584312 system_pods.go:61] "kube-proxy-hwpmv" [5496abbc-4af1-4815-885d-3a83720f5da5] Running
	I0915 07:15:01.611789 2584312 system_pods.go:61] "kube-proxy-kxkq4" [2d6740e8-303f-4341-9709-7cf07f95e677] Running
	I0915 07:15:01.611795 2584312 system_pods.go:61] "kube-scheduler-ha-985632" [abbc5af1-326a-4942-9dec-71709a5191bf] Running
	I0915 07:15:01.611802 2584312 system_pods.go:61] "kube-scheduler-ha-985632-m02" [4471a56a-407f-4c67-9a64-f06fcc93febb] Running
	I0915 07:15:01.611813 2584312 system_pods.go:61] "kube-vip-ha-985632" [0698325e-c6a6-413a-809f-57b5c3be149c] Running
	I0915 07:15:01.611817 2584312 system_pods.go:61] "kube-vip-ha-985632-m02" [c70ebdf3-2b0f-4588-9695-d1c624f753a3] Running
	I0915 07:15:01.611823 2584312 system_pods.go:61] "storage-provisioner" [ca802c89-9957-4c6b-b9de-5e3adfbfb8ad] Running
	I0915 07:15:01.611830 2584312 system_pods.go:74] duration metric: took 3.919046374s to wait for pod list to return data ...
	I0915 07:15:01.611843 2584312 default_sa.go:34] waiting for default service account to be created ...
	I0915 07:15:01.611931 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0915 07:15:01.611942 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:01.611952 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:01.611956 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:01.616299 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:15:01.616612 2584312 default_sa.go:45] found service account: "default"
	I0915 07:15:01.616632 2584312 default_sa.go:55] duration metric: took 4.781184ms for default service account to be created ...
	I0915 07:15:01.616641 2584312 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 07:15:01.616746 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0915 07:15:01.616769 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:01.616785 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:01.616791 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:01.622052 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:15:01.629888 2584312 system_pods.go:86] 19 kube-system pods found
	I0915 07:15:01.629930 2584312 system_pods.go:89] "coredns-7c65d6cfc9-fr4vw" [2c51f24e-62f4-4d54-ae17-07bd5906b0ff] Running
	I0915 07:15:01.629938 2584312 system_pods.go:89] "coredns-7c65d6cfc9-l2k54" [a101fed6-a598-4ae0-bd3d-405d69d55924] Running
	I0915 07:15:01.629944 2584312 system_pods.go:89] "etcd-ha-985632" [cb4271c2-e3ce-421e-b7f1-3897f92c617b] Running
	I0915 07:15:01.629949 2584312 system_pods.go:89] "etcd-ha-985632-m02" [2f474540-9140-4a89-a01b-985a3b523827] Running
	I0915 07:15:01.629954 2584312 system_pods.go:89] "kindnet-2f5fz" [499a9251-0388-4fdc-b520-366abcc00ad4] Running
	I0915 07:15:01.629961 2584312 system_pods.go:89] "kindnet-frm9q" [331f83ea-e1bf-47ba-ade0-850abb74ebdd] Running
	I0915 07:15:01.629967 2584312 system_pods.go:89] "kindnet-rcz7x" [19fa1dfb-9f39-46cf-8ba1-172183442b02] Running
	I0915 07:15:01.629975 2584312 system_pods.go:89] "kube-apiserver-ha-985632" [939491a1-5104-4698-b4cf-7c02d26a0abf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0915 07:15:01.629982 2584312 system_pods.go:89] "kube-apiserver-ha-985632-m02" [79b8e2ec-acc3-4283-822a-9e901089d1ad] Running
	I0915 07:15:01.629991 2584312 system_pods.go:89] "kube-controller-manager-ha-985632" [b005def9-272c-49c9-bbba-7fbae2ddbe0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0915 07:15:01.629997 2584312 system_pods.go:89] "kube-controller-manager-ha-985632-m02" [efc02497-f63b-4942-9516-baeab38141e9] Running
	I0915 07:15:01.630002 2584312 system_pods.go:89] "kube-proxy-5fsgj" [42c2fc5c-e87e-4c55-9d43-56d9ce8f9fba] Running
	I0915 07:15:01.630007 2584312 system_pods.go:89] "kube-proxy-hwpmv" [5496abbc-4af1-4815-885d-3a83720f5da5] Running
	I0915 07:15:01.630012 2584312 system_pods.go:89] "kube-proxy-kxkq4" [2d6740e8-303f-4341-9709-7cf07f95e677] Running
	I0915 07:15:01.630016 2584312 system_pods.go:89] "kube-scheduler-ha-985632" [abbc5af1-326a-4942-9dec-71709a5191bf] Running
	I0915 07:15:01.630021 2584312 system_pods.go:89] "kube-scheduler-ha-985632-m02" [4471a56a-407f-4c67-9a64-f06fcc93febb] Running
	I0915 07:15:01.630025 2584312 system_pods.go:89] "kube-vip-ha-985632" [0698325e-c6a6-413a-809f-57b5c3be149c] Running
	I0915 07:15:01.630028 2584312 system_pods.go:89] "kube-vip-ha-985632-m02" [c70ebdf3-2b0f-4588-9695-d1c624f753a3] Running
	I0915 07:15:01.630034 2584312 system_pods.go:89] "storage-provisioner" [ca802c89-9957-4c6b-b9de-5e3adfbfb8ad] Running
	I0915 07:15:01.630043 2584312 system_pods.go:126] duration metric: took 13.394694ms to wait for k8s-apps to be running ...
	I0915 07:15:01.630051 2584312 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 07:15:01.630116 2584312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:15:01.648621 2584312 system_svc.go:56] duration metric: took 18.559228ms WaitForService to wait for kubelet
	I0915 07:15:01.648653 2584312 kubeadm.go:582] duration metric: took 1m17.842218764s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
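	
	The kubelet service check above relies on systemctl's exit code rather than its output: with --quiet, "systemctl is-active" prints nothing and exits 0 only when the unit is active. An illustrative sketch, mirroring the exact command from the log:
	
	    // Check whether the kubelet service is active via systemctl exit code.
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    func main() {
	        // Exit 0 => active; any non-zero exit => inactive, failed, or unknown.
	        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	        fmt.Println("kubelet active:", err == nil)
	    }
	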
	I0915 07:15:01.648674 2584312 node_conditions.go:102] verifying NodePressure condition ...
	I0915 07:15:01.648758 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0915 07:15:01.648769 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:01.648778 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:01.648783 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:01.654503 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:15:01.655980 2584312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 07:15:01.656031 2584312 node_conditions.go:123] node cpu capacity is 2
	I0915 07:15:01.656052 2584312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 07:15:01.656060 2584312 node_conditions.go:123] node cpu capacity is 2
	I0915 07:15:01.656064 2584312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 07:15:01.656071 2584312 node_conditions.go:123] node cpu capacity is 2
	I0915 07:15:01.656078 2584312 node_conditions.go:105] duration metric: took 7.396932ms to run NodePressure ...
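	
	The NodePressure verification above lists all nodes once and reads each node's reported capacity (here: 203034800Ki ephemeral storage and 2 CPUs on each of the three remaining nodes). A minimal client-go sketch of that read; the kubeconfig path is again a placeholder:
	
	    // List nodes and print the capacity fields checked by node_conditions.
	    package main
	
	    import (
	        "context"
	        "fmt"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        client := kubernetes.NewForConfigOrDie(cfg)
	        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, n := range nodes.Items {
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	        }
	    }
	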
	I0915 07:15:01.656092 2584312 start.go:241] waiting for startup goroutines ...
	I0915 07:15:01.656119 2584312 start.go:255] writing updated cluster config ...
	I0915 07:15:01.659417 2584312 out.go:201] 
	I0915 07:15:01.662578 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:15:01.662716 2584312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/config.json ...
	I0915 07:15:01.665894 2584312 out.go:177] * Starting "ha-985632-m04" worker node in "ha-985632" cluster
	I0915 07:15:01.669302 2584312 cache.go:121] Beginning downloading kic base image for docker with crio
	I0915 07:15:01.672313 2584312 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 07:15:01.674986 2584312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 07:15:01.675024 2584312 cache.go:56] Caching tarball of preloaded images
	I0915 07:15:01.675085 2584312 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 07:15:01.675147 2584312 preload.go:172] Found /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0915 07:15:01.675160 2584312 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 07:15:01.675289 2584312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/config.json ...
	W0915 07:15:01.694493 2584312 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0915 07:15:01.694521 2584312 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 07:15:01.694605 2584312 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 07:15:01.694629 2584312 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 07:15:01.694638 2584312 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 07:15:01.694647 2584312 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 07:15:01.694658 2584312 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 07:15:01.696160 2584312 image.go:273] response: 
	I0915 07:15:01.823748 2584312 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 07:15:01.823790 2584312 cache.go:194] Successfully downloaded all kic artifacts
	I0915 07:15:01.823824 2584312 start.go:360] acquireMachinesLock for ha-985632-m04: {Name:mk43ca43ae69ba5392e48e863eca20be17f6d89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 07:15:01.823913 2584312 start.go:364] duration metric: took 50.468µs to acquireMachinesLock for "ha-985632-m04"
	I0915 07:15:01.823961 2584312 start.go:96] Skipping create...Using existing machine configuration
	I0915 07:15:01.823968 2584312 fix.go:54] fixHost starting: m04
	I0915 07:15:01.824265 2584312 cli_runner.go:164] Run: docker container inspect ha-985632-m04 --format={{.State.Status}}
	I0915 07:15:01.853178 2584312 fix.go:112] recreateIfNeeded on ha-985632-m04: state=Stopped err=<nil>
	W0915 07:15:01.853212 2584312 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 07:15:01.856566 2584312 out.go:177] * Restarting existing docker container for "ha-985632-m04" ...
	I0915 07:15:01.859344 2584312 cli_runner.go:164] Run: docker start ha-985632-m04
	I0915 07:15:02.195653 2584312 cli_runner.go:164] Run: docker container inspect ha-985632-m04 --format={{.State.Status}}
	I0915 07:15:02.222343 2584312 kic.go:430] container "ha-985632-m04" state is running.
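	
	fixHost above inspects the existing node container, finds it Stopped, and restarts it with "docker start" rather than recreating it, then re-inspects until the state is running. An illustrative sketch of that inspect-then-start sequence:
	
	    // Restart a stopped node container in place, as fixHost does above.
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )
	
	    func main() {
	        name := "ha-985632-m04"
	        out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
	        if err != nil {
	            panic(err)
	        }
	        if state := strings.TrimSpace(string(out)); state != "running" {
	            // e.g. "exited" after the node was stopped; start reuses the container.
	            if err := exec.Command("docker", "start", name).Run(); err != nil {
	                panic(err)
	            }
	        }
	        fmt.Println("container running")
	    }
	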
	I0915 07:15:02.222730 2584312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632-m04
	I0915 07:15:02.251442 2584312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/config.json ...
	I0915 07:15:02.251821 2584312 machine.go:93] provisionDockerMachine start ...
	I0915 07:15:02.251893 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m04
	I0915 07:15:02.274696 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:15:02.274936 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35818 <nil> <nil>}
	I0915 07:15:02.274946 2584312 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 07:15:02.276028 2584312 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0915 07:15:05.428301 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-985632-m04
	
	I0915 07:15:05.428325 2584312 ubuntu.go:169] provisioning hostname "ha-985632-m04"
	I0915 07:15:05.428394 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m04
	I0915 07:15:05.448554 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:15:05.449005 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35818 <nil> <nil>}
	I0915 07:15:05.449024 2584312 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-985632-m04 && echo "ha-985632-m04" | sudo tee /etc/hostname
	I0915 07:15:05.599445 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-985632-m04
	
	I0915 07:15:05.599531 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m04
	I0915 07:15:05.618937 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:15:05.619213 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35818 <nil> <nil>}
	I0915 07:15:05.619232 2584312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-985632-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-985632-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-985632-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 07:15:05.757248 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 07:15:05.757281 2584312 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-2517725/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-2517725/.minikube}
	I0915 07:15:05.757299 2584312 ubuntu.go:177] setting up certificates
	I0915 07:15:05.757309 2584312 provision.go:84] configureAuth start
	I0915 07:15:05.757378 2584312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632-m04
	I0915 07:15:05.774399 2584312 provision.go:143] copyHostCerts
	I0915 07:15:05.774453 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem
	I0915 07:15:05.774494 2584312 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem, removing ...
	I0915 07:15:05.774504 2584312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem
	I0915 07:15:05.774603 2584312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.pem (1082 bytes)
	I0915 07:15:05.774696 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem
	I0915 07:15:05.774718 2584312 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem, removing ...
	I0915 07:15:05.774723 2584312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem
	I0915 07:15:05.774750 2584312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/cert.pem (1123 bytes)
	I0915 07:15:05.774796 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem
	I0915 07:15:05.774818 2584312 exec_runner.go:144] found /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem, removing ...
	I0915 07:15:05.774827 2584312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem
	I0915 07:15:05.774853 2584312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-2517725/.minikube/key.pem (1675 bytes)
	I0915 07:15:05.774906 2584312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem org=jenkins.ha-985632-m04 san=[127.0.0.1 192.168.49.5 ha-985632-m04 localhost minikube]
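	
	The server cert generated above is signed by the profile's CA and carries the logged san=[...] entries as subject alternative names, so the Docker machine endpoint is valid for the node's IPs and hostnames. A compressed sketch of such a SAN-bearing cert with crypto/x509; the CA here is created inline and error handling is elided for brevity, whereas minikube loads the existing ca.pem/ca-key.pem from the profile directory:
	
	    // Issue a CA-signed server certificate with the SANs from the log above.
	    package main
	
	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )
	
	    func main() {
	        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        caTmpl := &x509.Certificate{
	            SerialNumber:          big.NewInt(1),
	            Subject:               pkix.Name{CommonName: "minikubeCA"},
	            NotBefore:             time.Now(),
	            NotAfter:              time.Now().AddDate(10, 0, 0),
	            IsCA:                  true,
	            KeyUsage:              x509.KeyUsageCertSign,
	            BasicConstraintsValid: true,
	        }
	        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	        caCert, _ := x509.ParseCertificate(caDER)
	
	        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        srvTmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(2),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-985632-m04"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(1, 0, 0),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // SANs mirroring the logged list: IPs plus DNS names.
	            DNSNames:    []string{"ha-985632-m04", "localhost", "minikube"},
	            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	        }
	        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	    }
	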
	I0915 07:15:05.951604 2584312 provision.go:177] copyRemoteCerts
	I0915 07:15:05.951685 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 07:15:05.951731 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m04
	I0915 07:15:05.972632 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35818 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m04/id_rsa Username:docker}
	I0915 07:15:06.086742 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0915 07:15:06.086820 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 07:15:06.121016 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0915 07:15:06.121080 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 07:15:06.153336 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0915 07:15:06.153461 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 07:15:06.181141 2584312 provision.go:87] duration metric: took 423.811894ms to configureAuth
	I0915 07:15:06.181215 2584312 ubuntu.go:193] setting minikube options for container-runtime
	I0915 07:15:06.181511 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:15:06.181654 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m04
	I0915 07:15:06.201011 2584312 main.go:141] libmachine: Using SSH client type: native
	I0915 07:15:06.201264 2584312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35818 <nil> <nil>}
	I0915 07:15:06.201285 2584312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 07:15:06.485290 2584312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 07:15:06.485378 2584312 machine.go:96] duration metric: took 4.233542657s to provisionDockerMachine
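[editor's note] The CRIO_MINIKUBE_OPTIONS command above is executed on the node through minikube's ssh_runner, over the docker-forwarded SSH port (127.0.0.1:35818 in this run). A minimal sketch of running one remote command that way, assuming golang.org/x/crypto/ssh and key-based auth; details are illustrative:

    package sshrun

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // Sketch: run one command on the node the way the ssh_runner lines do.
    func runRemote(addr, keyPath, cmd string) ([]byte, error) {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test sketch only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return nil, err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return nil, err
        }
        defer sess.Close()
        return sess.CombinedOutput(cmd)
    }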
	I0915 07:15:06.485414 2584312 start.go:293] postStartSetup for "ha-985632-m04" (driver="docker")
	I0915 07:15:06.485459 2584312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 07:15:06.485559 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 07:15:06.485625 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m04
	I0915 07:15:06.504183 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35818 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m04/id_rsa Username:docker}
	I0915 07:15:06.606315 2584312 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 07:15:06.610270 2584312 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 07:15:06.610304 2584312 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 07:15:06.610315 2584312 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 07:15:06.610322 2584312 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 07:15:06.610333 2584312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/addons for local assets ...
	I0915 07:15:06.610398 2584312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-2517725/.minikube/files for local assets ...
	I0915 07:15:06.610473 2584312 filesync.go:149] local asset: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem -> 25231162.pem in /etc/ssl/certs
	I0915 07:15:06.610480 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem -> /etc/ssl/certs/25231162.pem
	I0915 07:15:06.610582 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 07:15:06.620658 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem --> /etc/ssl/certs/25231162.pem (1708 bytes)
	I0915 07:15:06.649299 2584312 start.go:296] duration metric: took 163.840831ms for postStartSetup
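[editor's note] The "Couldn't set key VERSION_CODENAME, no corresponding struct field found" messages above come from mapping /etc/os-release KEY=value pairs onto a struct that lacks fields for those keys. A minimal parser sketch, illustrative rather than libmachine's actual code:

    package osrelease

    import (
        "bufio"
        "strings"
    )

    // Sketch: parse KEY=value lines from "cat /etc/os-release" output;
    // keys with no matching struct field are what trigger the
    // "no corresponding struct field found" warnings in the log.
    func parse(contents string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out
    }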
	I0915 07:15:06.649394 2584312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:15:06.649444 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m04
	I0915 07:15:06.667254 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35818 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m04/id_rsa Username:docker}
	I0915 07:15:06.762236 2584312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 07:15:06.767597 2584312 fix.go:56] duration metric: took 4.943619398s for fixHost
	I0915 07:15:06.767676 2584312 start.go:83] releasing machines lock for "ha-985632-m04", held for 4.943728089s
	I0915 07:15:06.767790 2584312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632-m04
	I0915 07:15:06.789706 2584312 out.go:177] * Found network options:
	I0915 07:15:06.792458 2584312 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0915 07:15:06.795221 2584312 proxy.go:119] fail to check proxy env: Error ip not in block
	W0915 07:15:06.795261 2584312 proxy.go:119] fail to check proxy env: Error ip not in block
	W0915 07:15:06.795288 2584312 proxy.go:119] fail to check proxy env: Error ip not in block
	W0915 07:15:06.795302 2584312 proxy.go:119] fail to check proxy env: Error ip not in block
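[editor's note] The four "fail to check proxy env: Error ip not in block" warnings above come from testing the node IPs against NO_PROXY entries: a plain IP entry (as set here, NO_PROXY=192.168.49.2,192.168.49.3) is not a CIDR block, so the block check fails harmlessly. A hedged sketch of such a membership check:

    package proxycheck

    import "net"

    // Sketch: report whether ip falls inside any NO_PROXY CIDR block
    // (e.g. "192.168.49.0/24"); a bare-IP entry like "192.168.49.2"
    // is not a block, which is what the warnings above flag.
    func inBlocks(ip string, blocks []string) bool {
        parsed := net.ParseIP(ip)
        if parsed == nil {
            return false
        }
        for _, b := range blocks {
            _, ipnet, err := net.ParseCIDR(b)
            if err != nil {
                continue // entry is a plain IP or hostname, not a CIDR block
            }
            if ipnet.Contains(parsed) {
                return true
            }
        }
        return false
    }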
	I0915 07:15:06.795391 2584312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 07:15:06.795441 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m04
	I0915 07:15:06.795731 2584312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 07:15:06.795805 2584312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m04
	I0915 07:15:06.825658 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35818 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m04/id_rsa Username:docker}
	I0915 07:15:06.826393 2584312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35818 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m04/id_rsa Username:docker}
	I0915 07:15:07.147422 2584312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 07:15:07.152967 2584312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:15:07.162947 2584312 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0915 07:15:07.163069 2584312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 07:15:07.172560 2584312 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0915 07:15:07.172629 2584312 start.go:495] detecting cgroup driver to use...
	I0915 07:15:07.172676 2584312 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 07:15:07.172753 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 07:15:07.189454 2584312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 07:15:07.202854 2584312 docker.go:217] disabling cri-docker service (if available) ...
	I0915 07:15:07.202960 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 07:15:07.218004 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 07:15:07.231545 2584312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 07:15:07.383722 2584312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 07:15:07.524777 2584312 docker.go:233] disabling docker service ...
	I0915 07:15:07.524967 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 07:15:07.541756 2584312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 07:15:07.555887 2584312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 07:15:07.701525 2584312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 07:15:07.827017 2584312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 07:15:07.850267 2584312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 07:15:07.870468 2584312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 07:15:07.870590 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:15:07.881913 2584312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 07:15:07.882039 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:15:07.893507 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:15:07.904673 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:15:07.915715 2584312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 07:15:07.925989 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:15:07.938552 2584312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:15:07.949198 2584312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 07:15:07.964585 2584312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 07:15:07.974595 2584312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 07:15:07.984369 2584312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:15:08.111524 2584312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0915 07:15:08.297665 2584312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 07:15:08.297756 2584312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 07:15:08.305771 2584312 start.go:563] Will wait 60s for crictl version
	I0915 07:15:08.305857 2584312 ssh_runner.go:195] Run: which crictl
	I0915 07:15:08.310641 2584312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 07:15:08.366570 2584312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
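[editor's note] The "Will wait 60s for socket path /var/run/crio/crio.sock" step after the crio restart is a simple stat poll with a deadline. A minimal sketch of that wait:

    package waitsock

    import (
        "fmt"
        "os"
        "time"
    )

    // Sketch: poll for a socket path until a deadline, mirroring the
    // "Will wait 60s for socket path" step once crio has been restarted.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }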
	I0915 07:15:08.366674 2584312 ssh_runner.go:195] Run: crio --version
	I0915 07:15:08.435251 2584312 ssh_runner.go:195] Run: crio --version
	I0915 07:15:08.496321 2584312 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0915 07:15:08.498957 2584312 out.go:177]   - env NO_PROXY=192.168.49.2
	I0915 07:15:08.501616 2584312 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0915 07:15:08.504378 2584312 cli_runner.go:164] Run: docker network inspect ha-985632 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 07:15:08.523850 2584312 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 07:15:08.527938 2584312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:15:08.541360 2584312 mustload.go:65] Loading cluster: ha-985632
	I0915 07:15:08.541608 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:15:08.541870 2584312 cli_runner.go:164] Run: docker container inspect ha-985632 --format={{.State.Status}}
	I0915 07:15:08.560744 2584312 host.go:66] Checking if "ha-985632" exists ...
	I0915 07:15:08.561138 2584312 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632 for IP: 192.168.49.5
	I0915 07:15:08.561156 2584312 certs.go:194] generating shared ca certs ...
	I0915 07:15:08.561172 2584312 certs.go:226] acquiring lock for ca certs: {Name:mk5e6b4b1562ab546f1aa57699f236200f49d7e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 07:15:08.561295 2584312 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key
	I0915 07:15:08.561341 2584312 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key
	I0915 07:15:08.561352 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 07:15:08.561366 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0915 07:15:08.561385 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 07:15:08.561401 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 07:15:08.561454 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116.pem (1338 bytes)
	W0915 07:15:08.561490 2584312 certs.go:480] ignoring /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116_empty.pem, impossibly tiny 0 bytes
	I0915 07:15:08.561503 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 07:15:08.561529 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/ca.pem (1082 bytes)
	I0915 07:15:08.561571 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/cert.pem (1123 bytes)
	I0915 07:15:08.561598 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/key.pem (1675 bytes)
	I0915 07:15:08.561646 2584312 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem (1708 bytes)
	I0915 07:15:08.561679 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem -> /usr/share/ca-certificates/25231162.pem
	I0915 07:15:08.561698 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:15:08.561709 2584312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116.pem -> /usr/share/ca-certificates/2523116.pem
	I0915 07:15:08.561733 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 07:15:08.592624 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 07:15:08.633808 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 07:15:08.666321 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 07:15:08.694464 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/ssl/certs/25231162.pem --> /usr/share/ca-certificates/25231162.pem (1708 bytes)
	I0915 07:15:08.725169 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 07:15:08.755437 2584312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-2517725/.minikube/certs/2523116.pem --> /usr/share/ca-certificates/2523116.pem (1338 bytes)
	I0915 07:15:08.793747 2584312 ssh_runner.go:195] Run: openssl version
	I0915 07:15:08.800196 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25231162.pem && ln -fs /usr/share/ca-certificates/25231162.pem /etc/ssl/certs/25231162.pem"
	I0915 07:15:08.813472 2584312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25231162.pem
	I0915 07:15:08.819141 2584312 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:58 /usr/share/ca-certificates/25231162.pem
	I0915 07:15:08.819206 2584312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25231162.pem
	I0915 07:15:08.828088 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25231162.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 07:15:08.838749 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 07:15:08.851256 2584312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:15:08.855431 2584312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:38 /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:15:08.855554 2584312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 07:15:08.863496 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 07:15:08.874396 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2523116.pem && ln -fs /usr/share/ca-certificates/2523116.pem /etc/ssl/certs/2523116.pem"
	I0915 07:15:08.885271 2584312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2523116.pem
	I0915 07:15:08.890710 2584312 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:58 /usr/share/ca-certificates/2523116.pem
	I0915 07:15:08.890850 2584312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2523116.pem
	I0915 07:15:08.899695 2584312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2523116.pem /etc/ssl/certs/51391683.0"
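[editor's note] The openssl/ln pairs above install each CA under its OpenSSL subject-hash name in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs). A sketch that derives the link name by shelling out to the same "openssl x509 -hash -noout -in" invocation the log runs:

    package certhash

    import (
        "os/exec"
        "strings"
    )

    // Sketch: compute the subject-hash link name for a cert via the
    // same command the log shows; the hash plus ".0" is the
    // /etc/ssl/certs symlink name.
    func subjectHashLink(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)) + ".0", nil
    }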
	I0915 07:15:08.910951 2584312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 07:15:08.916550 2584312 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 07:15:08.916647 2584312 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I0915 07:15:08.916764 2584312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-985632-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-985632 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 07:15:08.916911 2584312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 07:15:08.926890 2584312 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 07:15:08.927011 2584312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0915 07:15:08.937846 2584312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0915 07:15:08.958423 2584312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
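[editor's note] The kubelet [Unit]/[Service] drop-in shown above (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) is rendered from a template with per-node values such as --hostname-override and --node-ip. A text/template sketch of that rendering; the struct fields and the trimmed flag list are illustrative, not minikube's real types:

    package kubeletconf

    import (
        "os"
        "text/template"
    )

    // Sketch: render a kubelet drop-in like the one written above.
    var tmpl = template.Must(template.New("kubelet").Parse(`[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf
    `))

    func render(version, nodeName, nodeIP string) error {
        return tmpl.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{version, nodeName, nodeIP})
    }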
	I0915 07:15:08.980370 2584312 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0915 07:15:08.984558 2584312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 07:15:08.996317 2584312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:15:09.172891 2584312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:15:09.193666 2584312 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0915 07:15:09.194077 2584312 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:15:09.197031 2584312 out.go:177] * Verifying Kubernetes components...
	I0915 07:15:09.199738 2584312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 07:15:09.338860 2584312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 07:15:09.366535 2584312 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 07:15:09.366799 2584312 kapi.go:59] client config for ha-985632: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/client.crt", KeyFile:"/home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/ha-985632/client.key", CAFile:"/home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1e6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0915 07:15:09.366858 2584312 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0915 07:15:09.367070 2584312 node_ready.go:35] waiting up to 6m0s for node "ha-985632-m04" to be "Ready" ...
	I0915 07:15:09.367147 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:09.367152 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:09.367160 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:09.367165 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:09.370296 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:09.867972 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:09.868038 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:09.868062 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:09.868085 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:09.875717 2584312 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 07:15:10.367340 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:10.367369 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:10.367380 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:10.367385 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:10.370483 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:10.867353 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:10.867380 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:10.867390 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:10.867401 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:10.870447 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:11.367530 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:11.367549 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:11.367559 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:11.367565 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:11.370583 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:11.371671 2584312 node_ready.go:53] node "ha-985632-m04" has status "Ready":"Unknown"
	I0915 07:15:11.867302 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:11.867329 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:11.867346 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:11.867351 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:11.870402 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:12.367979 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:12.368013 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:12.368027 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:12.368033 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:12.371333 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:12.867476 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:12.867513 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:12.867526 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:12.867534 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:12.870904 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:13.367299 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:13.367326 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:13.367337 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:13.367343 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:13.370468 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:13.867343 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:13.867369 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:13.867380 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:13.867384 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:13.870445 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:13.870979 2584312 node_ready.go:53] node "ha-985632-m04" has status "Ready":"Unknown"
	I0915 07:15:14.367279 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:14.367319 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:14.367330 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:14.367334 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:14.372682 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:15:14.867922 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:14.867945 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:14.867955 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:14.867959 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:14.870832 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:15.367888 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:15.367914 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:15.367925 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:15.367930 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:15.371304 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:15.867898 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:15.867921 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:15.867931 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:15.867935 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:15.870824 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:15.871338 2584312 node_ready.go:53] node "ha-985632-m04" has status "Ready":"Unknown"
	I0915 07:15:16.368114 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:16.368138 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.368148 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.368152 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.371330 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:16.372313 2584312 node_ready.go:49] node "ha-985632-m04" has status "Ready":"True"
	I0915 07:15:16.372340 2584312 node_ready.go:38] duration metric: took 7.005257382s for node "ha-985632-m04" to be "Ready" ...
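[editor's note] The half-second GET loop against /api/v1/nodes/ha-985632-m04 above is node_ready.go waiting for the node's Ready condition to flip from "Unknown" to "True". A client-go sketch of the same wait, assuming a kubeconfig-derived clientset; this is illustrative, not minikube's code:

    package nodeready

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // Sketch: poll a node until its Ready condition is True, like the
    // GET loop in the log above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }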
	I0915 07:15:16.372350 2584312 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:15:16.372425 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0915 07:15:16.372437 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.372446 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.372452 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.377606 2584312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 07:15:16.385044 2584312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fr4vw" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:16.385183 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fr4vw
	I0915 07:15:16.385203 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.385213 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.385217 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.388353 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:16.389091 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:16.389104 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.389114 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.389118 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.391678 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:16.392565 2584312 pod_ready.go:93] pod "coredns-7c65d6cfc9-fr4vw" in "kube-system" namespace has status "Ready":"True"
	I0915 07:15:16.392587 2584312 pod_ready.go:82] duration metric: took 7.511341ms for pod "coredns-7c65d6cfc9-fr4vw" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:16.392600 2584312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l2k54" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:16.392668 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-l2k54
	I0915 07:15:16.392681 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.392690 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.392701 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.395566 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:16.396330 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:16.396351 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.396362 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.396366 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.404475 2584312 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 07:15:16.405555 2584312 pod_ready.go:93] pod "coredns-7c65d6cfc9-l2k54" in "kube-system" namespace has status "Ready":"True"
	I0915 07:15:16.405582 2584312 pod_ready.go:82] duration metric: took 12.974527ms for pod "coredns-7c65d6cfc9-l2k54" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:16.405594 2584312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:16.405663 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-985632
	I0915 07:15:16.405674 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.405683 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.405688 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.408687 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:16.409663 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:16.409686 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.409696 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.409700 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.412465 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:16.413508 2584312 pod_ready.go:93] pod "etcd-ha-985632" in "kube-system" namespace has status "Ready":"True"
	I0915 07:15:16.413532 2584312 pod_ready.go:82] duration metric: took 7.931278ms for pod "etcd-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:16.413545 2584312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:16.413651 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-985632-m02
	I0915 07:15:16.413660 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.413669 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.413676 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.416538 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:16.417322 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:15:16.417376 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.417393 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.417405 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.419994 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:16.420598 2584312 pod_ready.go:93] pod "etcd-ha-985632-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:15:16.420619 2584312 pod_ready.go:82] duration metric: took 7.045015ms for pod "etcd-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
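[editor's note] The per-pod waits here (coredns and etcd above, kube-apiserver below) apply the same test to each system-critical pod: its PodReady condition must report True. A sketch of that condition check:

    package podready

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // Sketch: the "Ready":"True" test applied to each pod in the waits
    // above and below; a pod is Ready when its PodReady condition is
    // ConditionTrue.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }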
	I0915 07:15:16.420640 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:16.568995 2584312 request.go:632] Waited for 148.282144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:16.569084 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:16.569095 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.569104 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.569109 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.572523 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:16.768650 2584312 request.go:632] Waited for 195.314585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:16.768738 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:16.768794 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.768831 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.768843 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.771699 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
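[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (QPS 5, burst 10), which these back-to-back pod and node GETs exceed. A hedged sketch showing where those limits live on rest.Config; the raised values are illustrative:

    package throttling

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // Sketch: build a clientset with a higher client-side rate limit,
    // which would reduce the throttling waits seen in the log.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }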
	I0915 07:15:16.969070 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:16.969103 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:16.969117 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:16.969155 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:16.972434 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:17.168712 2584312 request.go:632] Waited for 195.352893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:17.168910 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:17.168925 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:17.168934 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:17.168939 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:17.171761 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:17.421658 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:17.421681 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:17.421691 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:17.421696 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:17.424877 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:17.568248 2584312 request.go:632] Waited for 142.208644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:17.568366 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:17.568380 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:17.568390 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:17.568400 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:17.571306 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:17.920851 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:17.920876 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:17.920886 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:17.920892 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:17.923844 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:17.968451 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:17.968549 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:17.968566 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:17.968572 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:17.972065 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:18.421015 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:18.421040 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:18.421049 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:18.421054 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:18.424071 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:18.425333 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:18.425357 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:18.425366 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:18.425370 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:18.428361 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:18.429055 2584312 pod_ready.go:103] pod "kube-apiserver-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:15:18.921123 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:18.921153 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:18.921168 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:18.921173 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:18.924052 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:18.925331 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:18.925356 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:18.925366 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:18.925373 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:18.928154 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:19.421001 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:19.421026 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:19.421036 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:19.421044 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:19.424062 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:19.424941 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:19.424959 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:19.424969 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:19.424975 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:19.427929 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:19.921415 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:19.921441 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:19.921450 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:19.921454 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:19.924902 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:19.925869 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:19.925883 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:19.925897 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:19.925902 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:19.929216 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:20.421830 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:20.421857 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:20.421866 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:20.421870 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:20.424950 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:20.425963 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:20.425985 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:20.425994 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:20.425998 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:20.429069 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:20.429788 2584312 pod_ready.go:103] pod "kube-apiserver-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:15:20.921540 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:20.921562 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:20.921592 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:20.921598 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:20.924588 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:20.925390 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:20.925407 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:20.925416 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:20.925419 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:20.928163 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:21.421568 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:21.421594 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:21.421604 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:21.421609 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:21.431437 2584312 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0915 07:15:21.432484 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:21.432511 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:21.432521 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:21.432526 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:21.443705 2584312 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0915 07:15:21.921000 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:21.921025 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:21.921036 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:21.921040 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:21.924179 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:21.925578 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:21.925601 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:21.925611 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:21.925616 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:21.928530 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:22.421039 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:22.421063 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:22.421074 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:22.421080 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:22.424243 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:22.425370 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:22.425394 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:22.425404 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:22.425409 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:22.428165 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:22.921507 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:22.921528 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:22.921538 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:22.921542 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:22.925256 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:22.926426 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:22.926450 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:22.926461 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:22.926466 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:22.929413 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:22.930297 2584312 pod_ready.go:103] pod "kube-apiserver-ha-985632" in "kube-system" namespace has status "Ready":"False"
	I0915 07:15:23.420984 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:23.421005 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:23.421015 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:23.421022 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:23.423933 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:23.424960 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:23.424980 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:23.424988 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:23.424994 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:23.427721 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:23.921054 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:23.921082 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:23.921092 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:23.921096 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:23.924180 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:23.924970 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:23.924989 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:23.924998 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:23.925003 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:23.927611 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:24.421368 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:24.421396 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:24.421407 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:24.421412 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:24.424288 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:24.425480 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:24.425503 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:24.425513 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:24.425516 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:24.428150 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:24.921702 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:24.921730 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:24.921738 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:24.921743 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:24.925067 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:24.926167 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:24.926186 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:24.926197 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:24.926202 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:24.929167 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:25.421047 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632
	I0915 07:15:25.421067 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.421076 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.421080 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.423835 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:25.424953 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:25.424973 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.424996 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.425001 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.427728 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:25.428453 2584312 pod_ready.go:98] node "ha-985632" hosting pod "kube-apiserver-ha-985632" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-985632" has status "Ready":"Unknown"
	I0915 07:15:25.428486 2584312 pod_ready.go:82] duration metric: took 9.007833801s for pod "kube-apiserver-ha-985632" in "kube-system" namespace to be "Ready" ...
	E0915 07:15:25.428498 2584312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-985632" hosting pod "kube-apiserver-ha-985632" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-985632" has status "Ready":"Unknown"
	I0915 07:15:25.428509 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:25.428587 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632-m02
	I0915 07:15:25.428597 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.428604 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.428608 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.431808 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:25.432962 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:15:25.433032 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.433043 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.433047 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.435747 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:25.436442 2584312 pod_ready.go:93] pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:15:25.436471 2584312 pod_ready.go:82] duration metric: took 7.951659ms for pod "kube-apiserver-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:25.436493 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:25.436578 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632
	I0915 07:15:25.436583 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.436591 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.436594 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.439591 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:25.440705 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:25.440726 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.440736 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.440740 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.443603 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:25.444256 2584312 pod_ready.go:98] node "ha-985632" hosting pod "kube-controller-manager-ha-985632" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-985632" has status "Ready":"Unknown"
	I0915 07:15:25.444282 2584312 pod_ready.go:82] duration metric: took 7.779987ms for pod "kube-controller-manager-ha-985632" in "kube-system" namespace to be "Ready" ...
	E0915 07:15:25.444293 2584312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-985632" hosting pod "kube-controller-manager-ha-985632" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-985632" has status "Ready":"Unknown"
	I0915 07:15:25.444301 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:25.444374 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-985632-m02
	I0915 07:15:25.444384 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.444392 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.444396 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.448673 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:15:25.449741 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:15:25.449774 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.449784 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.449788 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.452445 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:25.453086 2584312 pod_ready.go:93] pod "kube-controller-manager-ha-985632-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:15:25.453110 2584312 pod_ready.go:82] duration metric: took 8.79762ms for pod "kube-controller-manager-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:25.453122 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5fsgj" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:25.453186 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5fsgj
	I0915 07:15:25.453196 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.453205 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.453208 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.455967 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:25.569021 2584312 request.go:632] Waited for 112.236391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:25.569081 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:25.569087 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.569097 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.569105 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.572150 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:25.577619 2584312 pod_ready.go:98] node "ha-985632" hosting pod "kube-proxy-5fsgj" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-985632" has status "Ready":"Unknown"
	I0915 07:15:25.577649 2584312 pod_ready.go:82] duration metric: took 124.519424ms for pod "kube-proxy-5fsgj" in "kube-system" namespace to be "Ready" ...
	E0915 07:15:25.577659 2584312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-985632" hosting pod "kube-proxy-5fsgj" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-985632" has status "Ready":"Unknown"
	I0915 07:15:25.577696 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hwpmv" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:25.769107 2584312 request.go:632] Waited for 191.335932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hwpmv
	I0915 07:15:25.769168 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hwpmv
	I0915 07:15:25.769178 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.769187 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.769191 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.772487 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:25.968510 2584312 request.go:632] Waited for 195.192347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:15:25.968567 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:15:25.968580 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:25.968590 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:25.968596 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:25.971631 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:25.972264 2584312 pod_ready.go:93] pod "kube-proxy-hwpmv" in "kube-system" namespace has status "Ready":"True"
	I0915 07:15:25.972285 2584312 pod_ready.go:82] duration metric: took 394.574262ms for pod "kube-proxy-hwpmv" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:25.972296 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxkq4" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:26.168185 2584312 request.go:632] Waited for 195.792046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxkq4
	I0915 07:15:26.168291 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxkq4
	I0915 07:15:26.168306 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:26.168315 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:26.168320 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:26.171243 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:26.368178 2584312 request.go:632] Waited for 196.291701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:26.368287 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m04
	I0915 07:15:26.368301 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:26.368310 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:26.368314 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:26.371282 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:26.371991 2584312 pod_ready.go:93] pod "kube-proxy-kxkq4" in "kube-system" namespace has status "Ready":"True"
	I0915 07:15:26.372012 2584312 pod_ready.go:82] duration metric: took 399.706322ms for pod "kube-proxy-kxkq4" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:26.372025 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-985632" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:26.568540 2584312 request.go:632] Waited for 196.426786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-985632
	I0915 07:15:26.568622 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-985632
	I0915 07:15:26.568651 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:26.568664 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:26.568669 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:26.574732 2584312 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:15:26.769094 2584312 request.go:632] Waited for 193.340011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:26.769177 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632
	I0915 07:15:26.769190 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:26.769201 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:26.769205 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:26.771981 2584312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0915 07:15:26.772654 2584312 pod_ready.go:98] node "ha-985632" hosting pod "kube-scheduler-ha-985632" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-985632" has status "Ready":"Unknown"
	I0915 07:15:26.772678 2584312 pod_ready.go:82] duration metric: took 400.646368ms for pod "kube-scheduler-ha-985632" in "kube-system" namespace to be "Ready" ...
	E0915 07:15:26.772687 2584312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-985632" hosting pod "kube-scheduler-ha-985632" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-985632" has status "Ready":"Unknown"
	I0915 07:15:26.772719 2584312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:26.968439 2584312 request.go:632] Waited for 195.601092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-985632-m02
	I0915 07:15:26.968513 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-985632-m02
	I0915 07:15:26.968525 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:26.968562 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:26.968573 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:26.973064 2584312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 07:15:27.169200 2584312 request.go:632] Waited for 195.39249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:15:27.169267 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-985632-m02
	I0915 07:15:27.169287 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:27.169299 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:27.169327 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:27.172467 2584312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 07:15:27.173070 2584312 pod_ready.go:93] pod "kube-scheduler-ha-985632-m02" in "kube-system" namespace has status "Ready":"True"
	I0915 07:15:27.173089 2584312 pod_ready.go:82] duration metric: took 400.360501ms for pod "kube-scheduler-ha-985632-m02" in "kube-system" namespace to be "Ready" ...
	I0915 07:15:27.173118 2584312 pod_ready.go:39] duration metric: took 10.800755985s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 07:15:27.173142 2584312 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 07:15:27.173214 2584312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:15:27.186122 2584312 system_svc.go:56] duration metric: took 12.971746ms WaitForService to wait for kubelet
	I0915 07:15:27.186200 2584312 kubeadm.go:582] duration metric: took 17.99209049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 07:15:27.186240 2584312 node_conditions.go:102] verifying NodePressure condition ...
	I0915 07:15:27.368554 2584312 request.go:632] Waited for 182.223858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0915 07:15:27.368616 2584312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0915 07:15:27.368622 2584312 round_trippers.go:469] Request Headers:
	I0915 07:15:27.368631 2584312 round_trippers.go:473]     Accept: application/json, */*
	I0915 07:15:27.368646 2584312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0915 07:15:27.374821 2584312 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 07:15:27.376498 2584312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 07:15:27.376535 2584312 node_conditions.go:123] node cpu capacity is 2
	I0915 07:15:27.376552 2584312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 07:15:27.376557 2584312 node_conditions.go:123] node cpu capacity is 2
	I0915 07:15:27.376582 2584312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 07:15:27.376595 2584312 node_conditions.go:123] node cpu capacity is 2
	I0915 07:15:27.376601 2584312 node_conditions.go:105] duration metric: took 190.34028ms to run NodePressure ...
	I0915 07:15:27.376615 2584312 start.go:241] waiting for startup goroutines ...
	I0915 07:15:27.376658 2584312 start.go:255] writing updated cluster config ...
	I0915 07:15:27.377092 2584312 ssh_runner.go:195] Run: rm -f paused
	I0915 07:15:27.445042 2584312 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 07:15:27.450056 2584312 out.go:177] * Done! kubectl is now configured to use "ha-985632" cluster and "default" namespace by default
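
Everything below this point is minikube's post-mortem log collection for the ha-985632 profile, appended after the "Done!" line above. As a hedged sketch (the profile and context names are read off the log itself, not from a command recorded in this report), output of this shape can be regenerated on a live cluster with:

	# aggregates the "==> CRI-O <==", "==> container status <==" and related sections
	minikube -p ha-985632 logs
	# produces the "==> describe nodes <==" section further below
	kubectl --context ha-985632 describe nodes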
	
	
	==> CRI-O <==
	Sep 15 07:14:46 ha-985632 crio[641]: time="2024-09-15 07:14:46.217172603Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 15 07:14:46 ha-985632 crio[641]: time="2024-09-15 07:14:46.303247474Z" level=info msg="Created container a42240521b6672bd6a73f8b372246928cb4bbe1bdbfa3193352bce512fb92ef8: kube-system/kube-apiserver-ha-985632/kube-apiserver" id=e5e50551-a61f-43e3-b0e1-50b5615e93cc name=/runtime.v1.RuntimeService/CreateContainer
	Sep 15 07:14:46 ha-985632 crio[641]: time="2024-09-15 07:14:46.303905411Z" level=info msg="Starting container: a42240521b6672bd6a73f8b372246928cb4bbe1bdbfa3193352bce512fb92ef8" id=096401dc-87f4-4ac6-82a8-bb183b01cbed name=/runtime.v1.RuntimeService/StartContainer
	Sep 15 07:14:46 ha-985632 crio[641]: time="2024-09-15 07:14:46.311800365Z" level=info msg="Started container" PID=1836 containerID=a42240521b6672bd6a73f8b372246928cb4bbe1bdbfa3193352bce512fb92ef8 description=kube-system/kube-apiserver-ha-985632/kube-apiserver id=096401dc-87f4-4ac6-82a8-bb183b01cbed name=/runtime.v1.RuntimeService/StartContainer sandboxID=916a2041587811974bd7b6938083581f3680ba478a03460582d37e45776bd908
	Sep 15 07:14:49 ha-985632 conmon[955]: conmon 813213a66782773f30a8 <ninfo>: container 975 exited with status 1
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.228608374Z" level=info msg="Checking image status: ghcr.io/kube-vip/kube-vip:v0.8.0" id=07598d63-6a6a-4f12-9b26-53e4d368aaf8 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.229408494Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kube-vip@sha256:6d75bc516a5ce412bd5b68e393f88a55d498448708a10b638fc48453ac98236e],Size_:48263643,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=07598d63-6a6a-4f12-9b26-53e4d368aaf8 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.235592754Z" level=info msg="Checking image status: ghcr.io/kube-vip/kube-vip:v0.8.0" id=afb21330-0211-41e9-bb6d-6e880c2597a1 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.235833971Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kube-vip@sha256:6d75bc516a5ce412bd5b68e393f88a55d498448708a10b638fc48453ac98236e],Size_:48263643,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=afb21330-0211-41e9-bb6d-6e880c2597a1 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.236898125Z" level=info msg="Creating container: kube-system/kube-vip-ha-985632/kube-vip" id=742f2925-12c1-4867-bfc8-09d6e76e03ae name=/runtime.v1.RuntimeService/CreateContainer
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.237012067Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.256989255Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9a82206f7b6e2004a2bd35de03077ece7939a54d9d36a79c05d87f5654ab9b49/merged/etc/passwd: no such file or directory"
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.257300558Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9a82206f7b6e2004a2bd35de03077ece7939a54d9d36a79c05d87f5654ab9b49/merged/etc/group: no such file or directory"
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.353936378Z" level=info msg="Created container 51cfa960a5fae8da10ff721914c19e4068678340d5882028088b2b2c1a795620: kube-system/kube-vip-ha-985632/kube-vip" id=742f2925-12c1-4867-bfc8-09d6e76e03ae name=/runtime.v1.RuntimeService/CreateContainer
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.354990432Z" level=info msg="Starting container: 51cfa960a5fae8da10ff721914c19e4068678340d5882028088b2b2c1a795620" id=9ef1da79-a50b-4335-be37-979539b87860 name=/runtime.v1.RuntimeService/StartContainer
	Sep 15 07:14:50 ha-985632 crio[641]: time="2024-09-15 07:14:50.370492890Z" level=info msg="Started container" PID=1887 containerID=51cfa960a5fae8da10ff721914c19e4068678340d5882028088b2b2c1a795620 description=kube-system/kube-vip-ha-985632/kube-vip id=9ef1da79-a50b-4335-be37-979539b87860 name=/runtime.v1.RuntimeService/StartContainer sandboxID=950e180645acb39986433298e805abb49533502a6b0fbdafab401b3309f573ca
	Sep 15 07:15:06 ha-985632 crio[641]: time="2024-09-15 07:15:06.934075011Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=db87c501-2f0f-41fb-a4b1-030d6202b696 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:15:06 ha-985632 crio[641]: time="2024-09-15 07:15:06.934291850Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=db87c501-2f0f-41fb-a4b1-030d6202b696 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:15:06 ha-985632 crio[641]: time="2024-09-15 07:15:06.935035914Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=eec5f9d6-d816-488e-93d6-d6e4b75ce837 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:15:06 ha-985632 crio[641]: time="2024-09-15 07:15:06.935224250Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=eec5f9d6-d816-488e-93d6-d6e4b75ce837 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:15:06 ha-985632 crio[641]: time="2024-09-15 07:15:06.935966878Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-985632/kube-controller-manager" id=9e16016f-8867-42eb-bf5d-8642bb68ab42 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 15 07:15:06 ha-985632 crio[641]: time="2024-09-15 07:15:06.936063262Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 15 07:15:07 ha-985632 crio[641]: time="2024-09-15 07:15:07.046613524Z" level=info msg="Created container 767c249ed8a031c5043ea297be1ddaf812fb7ae828394c457460b0d0d5fd8414: kube-system/kube-controller-manager-ha-985632/kube-controller-manager" id=9e16016f-8867-42eb-bf5d-8642bb68ab42 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 15 07:15:07 ha-985632 crio[641]: time="2024-09-15 07:15:07.047189354Z" level=info msg="Starting container: 767c249ed8a031c5043ea297be1ddaf812fb7ae828394c457460b0d0d5fd8414" id=b15976ce-60a2-4a79-9e94-925351bd6442 name=/runtime.v1.RuntimeService/StartContainer
	Sep 15 07:15:07 ha-985632 crio[641]: time="2024-09-15 07:15:07.060964084Z" level=info msg="Started container" PID=1936 containerID=767c249ed8a031c5043ea297be1ddaf812fb7ae828394c457460b0d0d5fd8414 description=kube-system/kube-controller-manager-ha-985632/kube-controller-manager id=b15976ce-60a2-4a79-9e94-925351bd6442 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e346e285b98e66c9b84bfd7c9a9f4c399c24cff40d0498cd6aecc74e44b6499
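
Two things stand out in the CRI-O log above: the control-plane containers (kube-apiserver, kube-vip, kube-controller-manager) are being recreated one after another during the restart, and conmon reports that the previous kube-vip container (ID prefix 813213a66782..., listed as Exited in the status table below) died with status 1. A hedged sketch for digging into that exit, assuming the profile name and that crictl accepts a unique ID prefix:

	# read the exited kube-vip container's own log from inside the node
	minikube -p ha-985632 ssh -- sudo crictl logs 813213a667827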
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	767c249ed8a03       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   22 seconds ago       Running             kube-controller-manager   9                   5e346e285b98e       kube-controller-manager-ha-985632
	51cfa960a5fae       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   39 seconds ago       Running             kube-vip                  3                   950e180645acb       kube-vip-ha-985632
	a42240521b667       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   43 seconds ago       Running             kube-apiserver            5                   916a204158781       kube-apiserver-ha-985632
	f330adc733848       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   57 seconds ago       Exited              kube-controller-manager   8                   5e346e285b98e       kube-controller-manager-ha-985632
	b84e8ccbb97c0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Running             storage-provisioner       4                   393dc7860bc6e       storage-provisioner
	4839bb964ab3f       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   5afadf79f1f83       busybox-7dff88458-h84wj
	7408f05306887       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   023c853fe4637       coredns-7c65d6cfc9-fr4vw
	7722a397e63dc       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   ebee814cf5828       kindnet-frm9q
	5af5e87142031       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       3                   393dc7860bc6e       storage-provisioner
	6d58918c45e10       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   About a minute ago   Running             kube-proxy                2                   4d8f8f3162c2a       kube-proxy-5fsgj
	b03ba4740caa0       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   d6a6e734be56d       coredns-7c65d6cfc9-l2k54
	813213a667827       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   About a minute ago   Exited              kube-vip                  2                   950e180645acb       kube-vip-ha-985632
	6c998643ce3d5       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Running             etcd                      2                   d7e3dcc801b0b       etcd-ha-985632
	66f56d702eb2d       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   About a minute ago   Exited              kube-apiserver            4                   916a204158781       kube-apiserver-ha-985632
	6292c7018e24c       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   About a minute ago   Running             kube-scheduler            2                   71d5f9bf261c9       kube-scheduler-ha-985632
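
The ATTEMPT column above is the key signal for this failure: kube-controller-manager is on attempt 9 (attempt 8 Exited 57 seconds earlier), kube-apiserver on attempt 5 (attempt 4 Exited), and kube-vip on attempt 3, i.e. the control plane crash-looped repeatedly while the cluster restarted, which lines up with the pod_ready polling above, where kube-apiserver-ha-985632 never reached "Ready":"True". A hedged way to read the same counters from the API (context name assumed from the profile):

	kubectl --context ha-985632 -n kube-system get pods -o wide
	# restart count of the apiserver static pod specifically
	kubectl --context ha-985632 -n kube-system get pod kube-apiserver-ha-985632 \
	  -o jsonpath='{.status.containerStatuses[0].restartCount}'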
	
	
	==> coredns [7408f053068871033b76c908815d215c044148dd91e678c5785f96d286804738] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47102 - 15442 "HINFO IN 4966829928990799316.1088956914093413613. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03582468s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[359201354]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 07:13:58.604) (total time: 30001ms):
	Trace[359201354]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (07:14:28.605)
	Trace[359201354]: [30.001288375s] [30.001288375s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1673202969]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 07:13:58.614) (total time: 30001ms):
	Trace[1673202969]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (07:14:28.615)
	Trace[1673202969]: [30.001541399s] [30.001541399s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1388618743]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 07:13:58.613) (total time: 30001ms):
	Trace[1388618743]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (07:14:28.614)
	Trace[1388618743]: [30.001815468s] [30.001815468s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [b03ba4740caa0a126944b3b270b57a94f0908e7c9aa92789e1f446f7286bdc2d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48536 - 57540 "HINFO IN 4616451304673594805.7214327839463048866. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016819029s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[731537519]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 07:13:58.663) (total time: 30001ms):
	Trace[731537519]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (07:14:28.664)
	Trace[731537519]: [30.001083005s] [30.001083005s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1545086753]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 07:13:58.663) (total time: 30001ms):
	Trace[1545086753]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (07:14:28.664)
	Trace[1545086753]: [30.001276905s] [30.001276905s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[722395685]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 07:13:58.663) (total time: 30001ms):
	Trace[722395685]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (07:14:28.664)
	Trace[722395685]: [30.001291903s] [30.001291903s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
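
Both CoreDNS replicas tell the same story: their Namespace/Service/EndpointSlice List calls to https://10.96.0.1:443 (the in-cluster kubernetes service VIP) all timed out after 30s between 07:13:58 and 07:14:28, consistent with the CRI-O log above, where the replacement kube-apiserver container only started at 07:14:46. A hedged sketch for probing the VIP once the apiserver is back (the context name, pod name, and availability of nc in that busybox image are assumptions):

	kubectl --context ha-985632 run vip-check --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- nc -zv 10.96.0.1 443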
	
	
	==> describe nodes <==
	Name:               ha-985632
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-985632
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-985632
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T07_02_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:02:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-985632
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:14:40 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 15 Sep 2024 07:13:59 +0000   Sun, 15 Sep 2024 07:15:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 15 Sep 2024 07:13:59 +0000   Sun, 15 Sep 2024 07:15:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 15 Sep 2024 07:13:59 +0000   Sun, 15 Sep 2024 07:15:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 15 Sep 2024 07:13:59 +0000   Sun, 15 Sep 2024 07:15:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-985632
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 51e2c8d5099746e2a70af495d4df15fb
	  System UUID:                43b6aed6-b4a7-4acc-b435-54d241e88290
	  Boot ID:                    86c781ec-01d2-4efb-aba1-a43f302ac663
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h84wj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-fr4vw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 coredns-7c65d6cfc9-l2k54             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-ha-985632                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-frm9q                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-985632             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-985632    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5fsgj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-985632             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-985632                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m38s                  kube-proxy       
	  Normal   Starting                 89s                    kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-985632 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-985632 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-985632 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-985632 event: Registered Node ha-985632 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-985632 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-985632 event: Registered Node ha-985632 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-985632 event: Registered Node ha-985632 in Controller
	  Normal   NodeHasSufficientPID     7m12s (x7 over 7m12s)  kubelet          Node ha-985632 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 7m12s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    7m12s (x8 over 7m12s)  kubelet          Node ha-985632 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m12s (x8 over 7m12s)  kubelet          Node ha-985632 status is now: NodeHasSufficientMemory
	  Normal   Starting                 7m12s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m14s                  node-controller  Node ha-985632 event: Registered Node ha-985632 in Controller
	  Normal   RegisteredNode           4m37s                  node-controller  Node ha-985632 event: Registered Node ha-985632 in Controller
	  Normal   RegisteredNode           3m33s                  node-controller  Node ha-985632 event: Registered Node ha-985632 in Controller
	  Normal   Starting                 119s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    118s (x8 over 119s)    kubelet          Node ha-985632 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  118s (x8 over 119s)    kubelet          Node ha-985632 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     118s (x7 over 119s)    kubelet          Node ha-985632 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           86s                    node-controller  Node ha-985632 event: Registered Node ha-985632 in Controller
	  Normal   RegisteredNode           19s                    node-controller  Node ha-985632 event: Registered Node ha-985632 in Controller
	  Normal   NodeNotReady             6s                     node-controller  Node ha-985632 status is now: NodeNotReady
	
	
	Name:               ha-985632-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-985632-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-985632
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_03_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:03:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-985632-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:15:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:13:54 +0000   Sun, 15 Sep 2024 07:07:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:13:54 +0000   Sun, 15 Sep 2024 07:07:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:13:54 +0000   Sun, 15 Sep 2024 07:07:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:13:54 +0000   Sun, 15 Sep 2024 07:07:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-985632-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb27ba2626d540f6a9062743a305f007
	  System UUID:                45ef1566-7d7e-4dfa-8b51-81a967bdafec
	  Boot ID:                    86c781ec-01d2-4efb-aba1-a43f302ac663
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-r4wpp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-985632-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-2f5fz                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-985632-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-985632-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hwpmv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-985632-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-985632-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 78s                    kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 7m53s                  kube-proxy       
	  Normal   Starting                 5m4s                   kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-985632-m02 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-985632-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-985632-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-985632-m02 event: Registered Node ha-985632-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-985632-m02 event: Registered Node ha-985632-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-985632-m02 event: Registered Node ha-985632-m02 in Controller
	  Normal   NodeHasSufficientPID     8m35s (x7 over 8m35s)  kubelet          Node ha-985632-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m35s (x8 over 8m35s)  kubelet          Node ha-985632-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m35s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m35s (x8 over 8m35s)  kubelet          Node ha-985632-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             8m16s                  node-controller  Node ha-985632-m02 status is now: NodeNotReady
	  Normal   NodeHasSufficientMemory  7m9s (x8 over 7m9s)    kubelet          Node ha-985632-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     7m9s (x7 over 7m9s)    kubelet          Node ha-985632-m02 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 7m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    7m9s (x8 over 7m9s)    kubelet          Node ha-985632-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           5m14s                  node-controller  Node ha-985632-m02 event: Registered Node ha-985632-m02 in Controller
	  Normal   RegisteredNode           4m37s                  node-controller  Node ha-985632-m02 event: Registered Node ha-985632-m02 in Controller
	  Normal   RegisteredNode           3m33s                  node-controller  Node ha-985632-m02 event: Registered Node ha-985632-m02 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  116s (x8 over 117s)    kubelet          Node ha-985632-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    116s (x8 over 117s)    kubelet          Node ha-985632-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s (x7 over 117s)    kubelet          Node ha-985632-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           86s                    node-controller  Node ha-985632-m02 event: Registered Node ha-985632-m02 in Controller
	  Normal   RegisteredNode           19s                    node-controller  Node ha-985632-m02 event: Registered Node ha-985632-m02 in Controller
	
	
	Name:               ha-985632-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-985632-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=ha-985632
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_15T07_05_31_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 07:05:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-985632-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:15:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 07:15:15 +0000   Sun, 15 Sep 2024 07:15:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 07:15:15 +0000   Sun, 15 Sep 2024 07:15:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 07:15:15 +0000   Sun, 15 Sep 2024 07:15:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 07:15:15 +0000   Sun, 15 Sep 2024 07:15:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-985632-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 52c0550a9cc14015a253be1a9e63f655
	  System UUID:                6173ef06-b6ba-4c80-9374-5d5746d48430
	  Boot ID:                    86c781ec-01d2-4efb-aba1-a43f302ac663
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2d8k2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kindnet-rcz7x              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-kxkq4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 8s                    kube-proxy       
	  Normal   Starting                 9m57s                 kube-proxy       
	  Normal   Starting                 2m47s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)     kubelet          Node ha-985632-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)     kubelet          Node ha-985632-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)     kubelet          Node ha-985632-m04 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 10m                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 10m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           9m58s                 node-controller  Node ha-985632-m04 event: Registered Node ha-985632-m04 in Controller
	  Normal   RegisteredNode           9m56s                 node-controller  Node ha-985632-m04 event: Registered Node ha-985632-m04 in Controller
	  Normal   RegisteredNode           9m56s                 node-controller  Node ha-985632-m04 event: Registered Node ha-985632-m04 in Controller
	  Normal   NodeReady                9m18s                 kubelet          Node ha-985632-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m14s                 node-controller  Node ha-985632-m04 event: Registered Node ha-985632-m04 in Controller
	  Normal   RegisteredNode           4m37s                 node-controller  Node ha-985632-m04 event: Registered Node ha-985632-m04 in Controller
	  Normal   NodeNotReady             4m34s                 node-controller  Node ha-985632-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m33s                 node-controller  Node ha-985632-m04 event: Registered Node ha-985632-m04 in Controller
	  Warning  CgroupV1                 3m8s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 3m8s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     3m2s (x7 over 3m8s)   kubelet          Node ha-985632-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m56s (x8 over 3m8s)  kubelet          Node ha-985632-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m56s (x8 over 3m8s)  kubelet          Node ha-985632-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           86s                   node-controller  Node ha-985632-m04 event: Registered Node ha-985632-m04 in Controller
	  Normal   NodeNotReady             46s                   node-controller  Node ha-985632-m04 status is now: NodeNotReady
	  Normal   Starting                 27s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 27s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     21s (x7 over 27s)     kubelet          Node ha-985632-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19s                   node-controller  Node ha-985632-m04 event: Registered Node ha-985632-m04 in Controller
	  Normal   NodeHasSufficientMemory  15s (x8 over 27s)     kubelet          Node ha-985632-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 27s)     kubelet          Node ha-985632-m04 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	[Sep15 05:34] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000091 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001089] FS-Cache: O-cookie d=000000009ec4a1b9{9P.session} n=00000000933e989b
	[  +0.001105] FS-Cache: O-key=[10] '34333036383438313233'
	[  +0.000796] FS-Cache: N-cookie c=00000092 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000965] FS-Cache: N-cookie d=000000009ec4a1b9{9P.session} n=00000000c50af53f
	[  +0.001363] FS-Cache: N-key=[10] '34333036383438313233'
	[Sep15 06:08] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [6c998643ce3d590ddef2376fa50627565dc57a9f96081c368d6abba76ecdb3a5] <==
	{"level":"info","ts":"2024-09-15T07:13:52.514304Z","caller":"traceutil/trace.go:171","msg":"trace[1208778112] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; }","duration":"9.675567635s","start":"2024-09-15T07:13:42.838731Z","end":"2024-09-15T07:13:52.514299Z","steps":["trace[1208778112] 'agreement among raft nodes before linearized reading'  (duration: 9.65788676s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T07:13:52.496840Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.800138908s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-15T07:13:52.514342Z","caller":"traceutil/trace.go:171","msg":"trace[1324798733] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; }","duration":"9.817642253s","start":"2024-09-15T07:13:42.696694Z","end":"2024-09-15T07:13:52.514336Z","steps":["trace[1324798733] 'agreement among raft nodes before linearized reading'  (duration: 9.800138104s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T07:13:52.498046Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"10.386436996s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-15T07:13:52.514495Z","caller":"traceutil/trace.go:171","msg":"trace[1274335738] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"10.402884662s","start":"2024-09-15T07:13:42.111604Z","end":"2024-09-15T07:13:52.514488Z","steps":["trace[1274335738] 'agreement among raft nodes before linearized reading'  (duration: 10.38643684s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T07:13:52.498199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"10.557459612s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-15T07:13:52.514537Z","caller":"traceutil/trace.go:171","msg":"trace[374988515] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; }","duration":"10.573798184s","start":"2024-09-15T07:13:41.940735Z","end":"2024-09-15T07:13:52.514533Z","steps":["trace[374988515] 'agreement among raft nodes before linearized reading'  (duration: 10.557459578s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T07:13:52.498354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"10.659800885s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-15T07:13:52.514567Z","caller":"traceutil/trace.go:171","msg":"trace[1195117841] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; }","duration":"10.676014874s","start":"2024-09-15T07:13:41.838549Z","end":"2024-09-15T07:13:52.514563Z","steps":["trace[1195117841] 'agreement among raft nodes before linearized reading'  (duration: 10.659801066s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.514682Z","caller":"traceutil/trace.go:171","msg":"trace[1988737666] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; }","duration":"7.08158956s","start":"2024-09-15T07:13:45.433087Z","end":"2024-09-15T07:13:52.514677Z","steps":["trace[1988737666] 'agreement among raft nodes before linearized reading'  (duration: 7.065337352s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.514726Z","caller":"traceutil/trace.go:171","msg":"trace[1337840187] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; }","duration":"10.110733538s","start":"2024-09-15T07:13:42.403988Z","end":"2024-09-15T07:13:52.514722Z","steps":["trace[1337840187] 'agreement among raft nodes before linearized reading'  (duration: 10.093647447s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.514758Z","caller":"traceutil/trace.go:171","msg":"trace[1325672378] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; }","duration":"7.066987446s","start":"2024-09-15T07:13:45.447766Z","end":"2024-09-15T07:13:52.514754Z","steps":["trace[1325672378] 'agreement among raft nodes before linearized reading'  (duration: 7.049501709s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.514806Z","caller":"traceutil/trace.go:171","msg":"trace[1774269735] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; }","duration":"9.614151512s","start":"2024-09-15T07:13:42.900650Z","end":"2024-09-15T07:13:52.514802Z","steps":["trace[1774269735] 'agreement among raft nodes before linearized reading'  (duration: 9.595861019s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.514840Z","caller":"traceutil/trace.go:171","msg":"trace[1569437010] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; }","duration":"9.702228058s","start":"2024-09-15T07:13:42.812608Z","end":"2024-09-15T07:13:52.514836Z","steps":["trace[1569437010] 'agreement among raft nodes before linearized reading'  (duration: 9.684025186s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.514871Z","caller":"traceutil/trace.go:171","msg":"trace[479129562] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; }","duration":"9.702411676s","start":"2024-09-15T07:13:42.812455Z","end":"2024-09-15T07:13:52.514867Z","steps":["trace[479129562] 'agreement among raft nodes before linearized reading'  (duration: 9.684190646s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.514912Z","caller":"traceutil/trace.go:171","msg":"trace[1745889949] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; }","duration":"9.818236904s","start":"2024-09-15T07:13:42.696661Z","end":"2024-09-15T07:13:52.514898Z","steps":["trace[1745889949] 'agreement among raft nodes before linearized reading'  (duration: 9.800190804s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.514973Z","caller":"traceutil/trace.go:171","msg":"trace[1957186142] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; }","duration":"10.155599877s","start":"2024-09-15T07:13:42.359361Z","end":"2024-09-15T07:13:52.514961Z","steps":["trace[1957186142] 'agreement among raft nodes before linearized reading'  (duration: 10.138286961s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.515009Z","caller":"traceutil/trace.go:171","msg":"trace[57909925] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"10.403431365s","start":"2024-09-15T07:13:42.111572Z","end":"2024-09-15T07:13:52.515003Z","steps":["trace[57909925] 'agreement among raft nodes before linearized reading'  (duration: 10.386481927s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.515032Z","caller":"traceutil/trace.go:171","msg":"trace[1129749481] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"10.445795471s","start":"2024-09-15T07:13:42.069233Z","end":"2024-09-15T07:13:52.515028Z","steps":["trace[1129749481] 'agreement among raft nodes before linearized reading'  (duration: 10.428834038s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.515074Z","caller":"traceutil/trace.go:171","msg":"trace[362022192] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; }","duration":"10.627448662s","start":"2024-09-15T07:13:41.887621Z","end":"2024-09-15T07:13:52.515070Z","steps":["trace[362022192] 'agreement among raft nodes before linearized reading'  (duration: 10.610585367s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.520221Z","caller":"traceutil/trace.go:171","msg":"trace[1884374099] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:2722; }","duration":"2.851481465s","start":"2024-09-15T07:13:49.668728Z","end":"2024-09-15T07:13:52.520210Z","steps":["trace[1884374099] 'agreement among raft nodes before linearized reading'  (duration: 2.851402706s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:52.520785Z","caller":"traceutil/trace.go:171","msg":"trace[1648187152] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2722; }","duration":"1.23449397s","start":"2024-09-15T07:13:51.286284Z","end":"2024-09-15T07:13:52.520778Z","steps":["trace[1648187152] 'agreement among raft nodes before linearized reading'  (duration: 1.23446618s)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T07:13:59.953635Z","caller":"traceutil/trace.go:171","msg":"trace[1260846832] linearizableReadLoop","detail":"{readStateIndex:3304; appliedIndex:3305; }","duration":"104.587471ms","start":"2024-09-15T07:13:59.849034Z","end":"2024-09-15T07:13:59.953621Z","steps":["trace[1260846832] 'read index received'  (duration: 104.58382ms)","trace[1260846832] 'applied index is now lower than readState.Index'  (duration: 3.044µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T07:13:59.954052Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.001197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-controller-manager-ha-985632.17f5597b773ed612\" ","response":"range_response_count:1 size:792"}
	{"level":"info","ts":"2024-09-15T07:13:59.954135Z","caller":"traceutil/trace.go:171","msg":"trace[2099797667] range","detail":"{range_begin:/registry/events/kube-system/kube-controller-manager-ha-985632.17f5597b773ed612; range_end:; response_count:1; response_revision:2812; }","duration":"105.0968ms","start":"2024-09-15T07:13:59.849028Z","end":"2024-09-15T07:13:59.954125Z","steps":["trace[2099797667] 'agreement among raft nodes before linearized reading'  (duration: 104.928541ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:15:30 up 14:58,  0 users,  load average: 2.71, 2.43, 1.90
	Linux ha-985632 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [7722a397e63dc9bb370ed15ba8e7df131f5a042f4dec7d427d95d179fbc5a81a] <==
	I0915 07:14:48.722681       1 main.go:322] Node ha-985632-m04 has CIDR [10.244.3.0/24] 
	I0915 07:14:58.722332       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:14:58.722502       1 main.go:299] handling current node
	I0915 07:14:58.722553       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0915 07:14:58.722591       1 main.go:322] Node ha-985632-m02 has CIDR [10.244.1.0/24] 
	I0915 07:14:58.722724       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0915 07:14:58.722766       1 main.go:322] Node ha-985632-m04 has CIDR [10.244.3.0/24] 
	I0915 07:15:08.729344       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:15:08.729385       1 main.go:299] handling current node
	I0915 07:15:08.729401       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0915 07:15:08.729407       1 main.go:322] Node ha-985632-m02 has CIDR [10.244.1.0/24] 
	I0915 07:15:08.729514       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0915 07:15:08.729528       1 main.go:322] Node ha-985632-m04 has CIDR [10.244.3.0/24] 
	I0915 07:15:18.722601       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:15:18.722649       1 main.go:299] handling current node
	I0915 07:15:18.722685       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0915 07:15:18.722692       1 main.go:322] Node ha-985632-m02 has CIDR [10.244.1.0/24] 
	I0915 07:15:18.722801       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0915 07:15:18.722817       1 main.go:322] Node ha-985632-m04 has CIDR [10.244.3.0/24] 
	I0915 07:15:28.723172       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:15:28.723208       1 main.go:299] handling current node
	I0915 07:15:28.723225       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0915 07:15:28.723232       1 main.go:322] Node ha-985632-m02 has CIDR [10.244.1.0/24] 
	I0915 07:15:28.723340       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0915 07:15:28.723353       1 main.go:322] Node ha-985632-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [66f56d702eb2d2725b7f19ae3ae9ae3911aeb392a8f82da8bf327450ebaf834d] <==
	W0915 07:13:52.544425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.FlowSchema: etcdserver: leader changed
	E0915 07:13:52.544477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.FlowSchema: failed to list *v1.FlowSchema: etcdserver: leader changed" logger="UnhandledError"
	I0915 07:13:53.346196       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 07:13:53.443390       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0915 07:13:53.545406       1 shared_informer.go:320] Caches are synced for configmaps
	W0915 07:13:53.648685       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0915 07:13:53.939742       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 07:13:53.939774       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 07:13:53.941249       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0915 07:13:53.941333       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0915 07:13:53.942009       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 07:13:53.947424       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 07:13:54.008480       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 07:13:54.008519       1 policy_source.go:224] refreshing policies
	I0915 07:13:54.042314       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 07:13:54.042450       1 aggregator.go:171] initial CRD sync complete...
	I0915 07:13:54.042474       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 07:13:54.042483       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 07:13:54.042489       1 cache.go:39] Caches are synced for autoregister controller
	I0915 07:13:54.052465       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 07:13:54.055901       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 07:13:54.062127       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0915 07:13:54.067156       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0915 07:13:54.105592       1 shared_informer.go:320] Caches are synced for node_authorizer
	F0915 07:14:45.339779       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [a42240521b6672bd6a73f8b372246928cb4bbe1bdbfa3193352bce512fb92ef8] <==
	I0915 07:14:49.334102       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0915 07:14:48.945817       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0915 07:14:48.945859       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0915 07:14:49.334297       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0915 07:14:49.334304       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 07:14:49.334335       1 aggregator.go:171] initial CRD sync complete...
	I0915 07:14:49.334347       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 07:14:49.334352       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 07:14:49.349277       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0915 07:14:49.358569       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 07:14:49.358670       1 policy_source.go:224] refreshing policies
	I0915 07:14:49.368726       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 07:14:49.434890       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0915 07:14:49.434968       1 cache.go:39] Caches are synced for autoregister controller
	I0915 07:14:49.435057       1 shared_informer.go:320] Caches are synced for configmaps
	I0915 07:14:49.435168       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 07:14:49.435199       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0915 07:14:49.439028       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0915 07:14:49.445182       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 07:14:49.446953       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 07:14:49.446975       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 07:14:49.959806       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0915 07:14:50.577077       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0915 07:14:50.578870       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 07:14:50.587793       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [767c249ed8a031c5043ea297be1ddaf812fb7ae828394c457460b0d0d5fd8414] <==
	I0915 07:15:15.953059       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-985632-m04"
	I0915 07:15:15.971792       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="82.016µs"
	I0915 07:15:16.472945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-985632-m04"
	I0915 07:15:21.113478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.643µs"
	I0915 07:15:21.131646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.591µs"
	I0915 07:15:21.291259       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="269.114µs"
	I0915 07:15:21.318159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.672µs"
	I0915 07:15:21.326017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.908µs"
	I0915 07:15:24.373923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.472µs"
	I0915 07:15:24.394014       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.048µs"
	I0915 07:15:24.971185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-985632"
	I0915 07:15:24.971233       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-985632-m04"
	I0915 07:15:24.993425       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-985632"
	I0915 07:15:25.072345       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-vznzc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-vznzc\": the object has been modified; please apply your changes to the latest version and try again"
	I0915 07:15:25.074012       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"eb210cac-cfb2-42d6-8a45-676a889fbeea", APIVersion:"v1", ResourceVersion:"246", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-vznzc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-vznzc": the object has been modified; please apply your changes to the latest version and try again
	I0915 07:15:25.141194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="135.013839ms"
	I0915 07:15:25.141499       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="145.005µs"
	I0915 07:15:25.192302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.496082ms"
	I0915 07:15:25.192401       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.322µs"
	I0915 07:15:25.271013       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.820213ms"
	I0915 07:15:25.271214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.515µs"
	I0915 07:15:25.287887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.49005ms"
	I0915 07:15:25.288058       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.48µs"
	I0915 07:15:26.576428       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-985632"
	I0915 07:15:30.422610       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-985632"
	
	
	==> kube-controller-manager [f330adc7338488f630e5156aa342fe4a7f98e9f8a90a8cc4e83e0a443db183a6] <==
	I0915 07:14:33.401622       1 serving.go:386] Generated self-signed cert in-memory
	I0915 07:14:34.960698       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0915 07:14:34.960736       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:14:34.962369       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0915 07:14:34.962546       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0915 07:14:34.962586       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 07:14:34.962564       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0915 07:14:44.980727       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [6d58918c45e1065d300c1180b637c3fe7c6ca2c51ce40ca5219428f8657011a1] <==
	I0915 07:13:59.125167       1 server_linux.go:66] "Using iptables proxy"
	I0915 07:14:00.301193       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 07:14:00.301386       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 07:14:00.416830       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 07:14:00.416911       1 server_linux.go:169] "Using iptables Proxier"
	I0915 07:14:00.419786       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 07:14:00.420350       1 server.go:483] "Version info" version="v1.31.1"
	I0915 07:14:00.420381       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 07:14:00.425895       1 config.go:199] "Starting service config controller"
	I0915 07:14:00.426011       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 07:14:00.426066       1 config.go:105] "Starting endpoint slice config controller"
	I0915 07:14:00.426098       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 07:14:00.426884       1 config.go:328] "Starting node config controller"
	I0915 07:14:00.432354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 07:14:00.527036       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 07:14:00.527117       1 shared_informer.go:320] Caches are synced for service config
	I0915 07:14:00.533617       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6292c7018e24c3562453416c6780bd075f679f4c81bbd4e681199ed127cda524] <==
	E0915 07:13:52.637071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 07:13:53.188358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 07:13:53.188413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:13:53.219401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 07:13:53.219532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 07:13:53.297420       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 07:13:53.297488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 07:13:53.757008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0915 07:13:53.757056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	I0915 07:13:54.815127       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0915 07:14:49.323597       1 event.go:359] "Server rejected event (will not retry!)" err="events \"busybox-7dff88458-2d8k2.17f5598ad5ed66dc\" is forbidden: User \"system:kube-scheduler\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{busybox-7dff88458-2d8k2.17f5598ad5ed66dc  default   3094 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:default,Name:busybox-7dff88458-2d8k2,UID:2d3cb6fc-9c55-44f1-a553-7d9792df052a,APIVersion:v1,ResourceVersion:3095,FieldPath:,},Reason:FailedScheduling,Message:0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.,Source:EventSource{Component:default-scheduler,Host:,},FirstTimestamp:2024-09-15 07:14:44 +0000 UTC,LastTimestamp:2024-09-15 07:14:47.649577804 +0000 UTC m=+68.724110068,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:default-scheduler,ReportingInstance:,}"
	E0915 07:14:49.331096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:41132->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.331264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:41212->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.331346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:41184->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.333192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:41178->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.333363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:41166->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.333447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:41162->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.333528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:41146->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.333604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:41100->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.333674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:41108->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.335347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:41116->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.335474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:41220->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.335565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:41198->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.335648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:41188->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0915 07:14:49.335733       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:41084->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 15 07:14:45 ha-985632 kubelet[758]: E0915 07:14:45.214790     758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-985632_kube-system(fc48467c801661d02b4270fab26c6432)\"" pod="kube-system/kube-controller-manager-ha-985632" podUID="fc48467c801661d02b4270fab26c6432"
	Sep 15 07:14:46 ha-985632 kubelet[758]: I0915 07:14:46.214116     758 scope.go:117] "RemoveContainer" containerID="66f56d702eb2d2725b7f19ae3ae9ae3911aeb392a8f82da8bf327450ebaf834d"
	Sep 15 07:14:46 ha-985632 kubelet[758]: I0915 07:14:46.215488     758 status_manager.go:851] "Failed to get status for pod" podUID="3eb2fefa7c5a159861d1505189b873c2" pod="kube-system/kube-apiserver-ha-985632" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-985632\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Sep 15 07:14:46 ha-985632 kubelet[758]: E0915 07:14:46.217264     758 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-985632.17f5597b5b865b12\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-985632.17f5597b5b865b12  kube-system   2793 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-985632,UID:3eb2fefa7c5a159861d1505189b873c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.1\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-985632,},FirstTimestamp:2024-09-15 07:13:38 +0000 UTC,LastTimestamp:2024-09-15 07:14:46.215620518 +0000 UTC m=+74.491371528,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-985632,}"
	Sep 15 07:14:48 ha-985632 kubelet[758]: I0915 07:14:48.018021     758 scope.go:117] "RemoveContainer" containerID="f330adc7338488f630e5156aa342fe4a7f98e9f8a90a8cc4e83e0a443db183a6"
	Sep 15 07:14:48 ha-985632 kubelet[758]: E0915 07:14:48.018757     758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-985632_kube-system(fc48467c801661d02b4270fab26c6432)\"" pod="kube-system/kube-controller-manager-ha-985632" podUID="fc48467c801661d02b4270fab26c6432"
	Sep 15 07:14:49 ha-985632 kubelet[758]: E0915 07:14:49.105963     758 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:53256->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 15 07:14:49 ha-985632 kubelet[758]: E0915 07:14:49.106084     758 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:53258->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 15 07:14:49 ha-985632 kubelet[758]: E0915 07:14:49.106117     758 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:53286->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 15 07:14:49 ha-985632 kubelet[758]: E0915 07:14:49.106145     758 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:53284->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 15 07:14:50 ha-985632 kubelet[758]: I0915 07:14:50.227633     758 scope.go:117] "RemoveContainer" containerID="813213a66782773f30a88c8bbc7d6e4c05a32b47f4a413aa3d59cdbf0b6de748"
	Sep 15 07:14:51 ha-985632 kubelet[758]: E0915 07:14:51.947559     758 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384491947082005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:51 ha-985632 kubelet[758]: E0915 07:14:51.948265     758 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384491947082005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:14:54 ha-985632 kubelet[758]: I0915 07:14:54.830979     758 scope.go:117] "RemoveContainer" containerID="f330adc7338488f630e5156aa342fe4a7f98e9f8a90a8cc4e83e0a443db183a6"
	Sep 15 07:14:54 ha-985632 kubelet[758]: E0915 07:14:54.831150     758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-985632_kube-system(fc48467c801661d02b4270fab26c6432)\"" pod="kube-system/kube-controller-manager-ha-985632" podUID="fc48467c801661d02b4270fab26c6432"
	Sep 15 07:15:01 ha-985632 kubelet[758]: E0915 07:15:01.344465     758 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-985632?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 15 07:15:01 ha-985632 kubelet[758]: E0915 07:15:01.949765     758 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384501949387442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:01 ha-985632 kubelet[758]: E0915 07:15:01.949802     758 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384501949387442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:06 ha-985632 kubelet[758]: I0915 07:15:06.933450     758 scope.go:117] "RemoveContainer" containerID="f330adc7338488f630e5156aa342fe4a7f98e9f8a90a8cc4e83e0a443db183a6"
	Sep 15 07:15:11 ha-985632 kubelet[758]: E0915 07:15:11.345623     758 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-985632?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 15 07:15:11 ha-985632 kubelet[758]: E0915 07:15:11.951383     758 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384511951174727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:11 ha-985632 kubelet[758]: E0915 07:15:11.951684     758 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384511951174727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:21 ha-985632 kubelet[758]: E0915 07:15:21.345918     758 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-985632?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 15 07:15:21 ha-985632 kubelet[758]: E0915 07:15:21.953018     758 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384521952751299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:15:21 ha-985632 kubelet[758]: E0915 07:15:21.953056     758 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726384521952751299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-985632 -n ha-985632
helpers_test.go:261: (dbg) Run:  kubectl --context ha-985632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (127.36s)
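Note on the failure mode above: the kubelet log shows two recurring problems while the cluster restarts — the eviction manager cannot get image filesystem stats from CRI-O ("missing image stats"), and kube-controller-manager sits in CrashLoopBackOff while lease updates against the API server time out. As a rough sketch of how to reproduce the same CRI query by hand (assuming crictl is present on the node, as it is in the kicbase image; not part of the test run):

	out/minikube-linux-arm64 -p ha-985632 ssh
	sudo crictl imagefsinfo                              # inside the node; the same ImageFsInfo call the eviction manager makes
	sudo journalctl -u kubelet | grep eviction_manager   # surface the repeated eviction-manager errors

crictl talks to the same CRI-O socket the kubelet uses, so its imagefsinfo output should correspond to the ImageFsInfoResponse embedded in the errors above; exact formatting varies with the crictl and CRI-O versions.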

                                                
                                    

Test pass (294/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.69
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 5.02
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.09
18 TestDownloadOnly/v1.31.1/DeleteAll 0.23
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 212.75
31 TestAddons/serial/GCPAuth/Namespaces 0.24
35 TestAddons/parallel/InspektorGadget 12.07
39 TestAddons/parallel/CSI 46.86
40 TestAddons/parallel/Headlamp 17.73
41 TestAddons/parallel/CloudSpanner 5.66
42 TestAddons/parallel/LocalPath 53.64
43 TestAddons/parallel/NvidiaDevicePlugin 6.53
44 TestAddons/parallel/Yakd 11.9
45 TestAddons/StoppedEnableDisable 6.27
46 TestCertOptions 32.29
47 TestCertExpiration 248.63
49 TestForceSystemdFlag 45.88
50 TestForceSystemdEnv 38.94
56 TestErrorSpam/setup 32.77
57 TestErrorSpam/start 0.79
58 TestErrorSpam/status 1.2
59 TestErrorSpam/pause 1.93
60 TestErrorSpam/unpause 1.88
61 TestErrorSpam/stop 1.47
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 82.3
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 26.77
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.45
73 TestFunctional/serial/CacheCmd/cache/add_local 1.42
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.22
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.15
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 42.16
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.81
84 TestFunctional/serial/LogsFileCmd 1.81
85 TestFunctional/serial/InvalidService 4.3
87 TestFunctional/parallel/ConfigCmd 0.46
88 TestFunctional/parallel/DashboardCmd 11.49
89 TestFunctional/parallel/DryRun 0.55
90 TestFunctional/parallel/InternationalLanguage 0.31
91 TestFunctional/parallel/StatusCmd 1.05
95 TestFunctional/parallel/ServiceCmdConnect 11.74
96 TestFunctional/parallel/AddonsCmd 0.24
97 TestFunctional/parallel/PersistentVolumeClaim 25
99 TestFunctional/parallel/SSHCmd 0.68
100 TestFunctional/parallel/CpCmd 2.41
102 TestFunctional/parallel/FileSync 0.41
103 TestFunctional/parallel/CertSync 2.12
107 TestFunctional/parallel/NodeLabels 0.14
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
111 TestFunctional/parallel/License 0.28
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.28
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
125 TestFunctional/parallel/ProfileCmd/profile_list 0.45
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
127 TestFunctional/parallel/MountCmd/any-port 9.39
128 TestFunctional/parallel/ServiceCmd/List 0.6
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
131 TestFunctional/parallel/ServiceCmd/Format 0.4
132 TestFunctional/parallel/ServiceCmd/URL 0.38
133 TestFunctional/parallel/MountCmd/specific-port 2.45
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.64
135 TestFunctional/parallel/Version/short 0.09
136 TestFunctional/parallel/Version/components 1.31
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
142 TestFunctional/parallel/ImageCommands/Setup 0.69
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.64
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.61
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 174.37
160 TestMultiControlPlane/serial/DeployApp 9.83
161 TestMultiControlPlane/serial/PingHostFromPods 1.81
162 TestMultiControlPlane/serial/AddWorkerNode 63.56
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
165 TestMultiControlPlane/serial/CopyFile 19.84
166 TestMultiControlPlane/serial/StopSecondaryNode 12.89
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 35.73
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.47
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 311.39
171 TestMultiControlPlane/serial/DeleteSecondaryNode 13.67
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
173 TestMultiControlPlane/serial/StopCluster 25.48
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.59
176 TestMultiControlPlane/serial/AddSecondaryNode 70.35
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.76
181 TestJSONOutput/start/Command 78.9
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.75
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.79
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.83
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.24
206 TestKicCustomNetwork/create_custom_network 40.52
207 TestKicCustomNetwork/use_default_bridge_network 35.29
208 TestKicExistingNetwork 35.35
209 TestKicCustomSubnet 34.36
210 TestKicStaticIP 37.52
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 68.18
215 TestMountStart/serial/StartWithMountFirst 6.96
216 TestMountStart/serial/VerifyMountFirst 0.28
217 TestMountStart/serial/StartWithMountSecond 6.89
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.64
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 8.09
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 133.89
227 TestMultiNode/serial/DeployApp2Nodes 6.62
228 TestMultiNode/serial/PingHostFrom2Pods 0.99
229 TestMultiNode/serial/AddNode 27.02
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.34
232 TestMultiNode/serial/CopyFile 10.4
233 TestMultiNode/serial/StopNode 2.27
234 TestMultiNode/serial/StartAfterStop 10.58
235 TestMultiNode/serial/RestartKeepsNodes 103.01
236 TestMultiNode/serial/DeleteNode 5.65
237 TestMultiNode/serial/StopMultiNode 24.02
238 TestMultiNode/serial/RestartMultiNode 50.81
239 TestMultiNode/serial/ValidateNameConflict 35.27
244 TestPreload 131.34
246 TestScheduledStopUnix 106.29
249 TestInsufficientStorage 10.95
250 TestRunningBinaryUpgrade 69.57
252 TestKubernetesUpgrade 385.99
253 TestMissingContainerUpgrade 120.28
255 TestPause/serial/Start 92.65
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 44.84
259 TestNoKubernetes/serial/StartWithStopK8s 7.3
260 TestNoKubernetes/serial/Start 6.44
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
262 TestNoKubernetes/serial/ProfileList 0.98
263 TestNoKubernetes/serial/Stop 1.21
264 TestNoKubernetes/serial/StartNoArgs 7.37
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
273 TestNetworkPlugins/group/false 3.85
277 TestPause/serial/SecondStartNoReconfiguration 29.58
278 TestPause/serial/Pause 0.92
279 TestPause/serial/VerifyStatus 0.38
280 TestPause/serial/Unpause 0.87
281 TestPause/serial/PauseAgain 1.36
282 TestPause/serial/DeletePaused 3.33
283 TestPause/serial/VerifyDeletedResources 0.48
284 TestStoppedBinaryUpgrade/Setup 0.65
285 TestStoppedBinaryUpgrade/Upgrade 111.69
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.06
294 TestNetworkPlugins/group/auto/Start 81.08
295 TestNetworkPlugins/group/auto/KubeletFlags 0.37
296 TestNetworkPlugins/group/auto/NetCatPod 12.45
297 TestNetworkPlugins/group/auto/DNS 0.19
298 TestNetworkPlugins/group/auto/Localhost 0.15
299 TestNetworkPlugins/group/auto/HairPin 0.15
300 TestNetworkPlugins/group/kindnet/Start 48.89
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.31
304 TestNetworkPlugins/group/kindnet/DNS 0.23
305 TestNetworkPlugins/group/kindnet/Localhost 0.18
306 TestNetworkPlugins/group/kindnet/HairPin 0.14
307 TestNetworkPlugins/group/calico/Start 68.62
308 TestNetworkPlugins/group/custom-flannel/Start 55.72
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.37
311 TestNetworkPlugins/group/calico/NetCatPod 13.36
312 TestNetworkPlugins/group/calico/DNS 0.22
313 TestNetworkPlugins/group/calico/Localhost 0.17
314 TestNetworkPlugins/group/calico/HairPin 0.19
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.42
317 TestNetworkPlugins/group/enable-default-cni/Start 44.71
318 TestNetworkPlugins/group/custom-flannel/DNS 0.26
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
321 TestNetworkPlugins/group/flannel/Start 78.68
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 29.37
324 TestNetworkPlugins/group/enable-default-cni/DNS 5.31
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
327 TestNetworkPlugins/group/bridge/Start 80.9
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
330 TestNetworkPlugins/group/flannel/NetCatPod 11.38
331 TestNetworkPlugins/group/flannel/DNS 0.24
332 TestNetworkPlugins/group/flannel/Localhost 0.23
333 TestNetworkPlugins/group/flannel/HairPin 0.22
335 TestStartStop/group/old-k8s-version/serial/FirstStart 193.91
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
337 TestNetworkPlugins/group/bridge/NetCatPod 11.26
338 TestNetworkPlugins/group/bridge/DNS 0.18
339 TestNetworkPlugins/group/bridge/Localhost 0.16
340 TestNetworkPlugins/group/bridge/HairPin 0.15
342 TestStartStop/group/no-preload/serial/FirstStart 69.94
343 TestStartStop/group/no-preload/serial/DeployApp 10.42
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
345 TestStartStop/group/no-preload/serial/Stop 11.93
346 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
347 TestStartStop/group/no-preload/serial/SecondStart 267.56
348 TestStartStop/group/old-k8s-version/serial/DeployApp 11.6
349 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
350 TestStartStop/group/old-k8s-version/serial/Stop 12.16
351 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
352 TestStartStop/group/old-k8s-version/serial/SecondStart 144.06
353 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
354 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
355 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
356 TestStartStop/group/old-k8s-version/serial/Pause 3.08
358 TestStartStop/group/embed-certs/serial/FirstStart 78.51
359 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
360 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
361 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
362 TestStartStop/group/no-preload/serial/Pause 3.14
364 TestStartStop/group/newest-cni/serial/FirstStart 37.44
365 TestStartStop/group/embed-certs/serial/DeployApp 9.4
366 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.46
367 TestStartStop/group/embed-certs/serial/Stop 12.18
368 TestStartStop/group/newest-cni/serial/DeployApp 0
369 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
370 TestStartStop/group/newest-cni/serial/Stop 1.23
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
372 TestStartStop/group/newest-cni/serial/SecondStart 25.31
373 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
374 TestStartStop/group/embed-certs/serial/SecondStart 335.41
375 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
377 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
378 TestStartStop/group/newest-cni/serial/Pause 3.74
380 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.43
381 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
383 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.96
384 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
385 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 295.12
386 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
388 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
389 TestStartStop/group/embed-certs/serial/Pause 3.1
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 3
TestDownloadOnly/v1.20.0/json-events (9.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-196406 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-196406 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.684826356s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.69s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-196406
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-196406: exit status 85 (71.482068ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-196406 | jenkins | v1.34.0 | 15 Sep 24 06:37 UTC |          |
	|         | -p download-only-196406        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:37:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:37:55.683189 2523121 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:37:55.683426 2523121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:37:55.683453 2523121 out.go:358] Setting ErrFile to fd 2...
	I0915 06:37:55.683473 2523121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:37:55.683847 2523121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	W0915 06:37:55.684063 2523121 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19644-2517725/.minikube/config/config.json: open /home/jenkins/minikube-integration/19644-2517725/.minikube/config/config.json: no such file or directory
	I0915 06:37:55.684597 2523121 out.go:352] Setting JSON to true
	I0915 06:37:55.685536 2523121 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":51627,"bootTime":1726330649,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0915 06:37:55.685670 2523121 start.go:139] virtualization:  
	I0915 06:37:55.689662 2523121 out.go:97] [download-only-196406] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0915 06:37:55.689909 2523121 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball: no such file or directory
	I0915 06:37:55.689946 2523121 notify.go:220] Checking for updates...
	I0915 06:37:55.692715 2523121 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:37:55.695394 2523121 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:37:55.698456 2523121 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 06:37:55.701214 2523121 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	I0915 06:37:55.704171 2523121 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0915 06:37:55.709559 2523121 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:37:55.709865 2523121 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:37:55.735473 2523121 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:37:55.735588 2523121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:37:55.791208 2523121 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:37:55.780906444 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:37:55.791328 2523121 docker.go:318] overlay module found
	I0915 06:37:55.794023 2523121 out.go:97] Using the docker driver based on user configuration
	I0915 06:37:55.794079 2523121 start.go:297] selected driver: docker
	I0915 06:37:55.794091 2523121 start.go:901] validating driver "docker" against <nil>
	I0915 06:37:55.794206 2523121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:37:55.849668 2523121 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:37:55.84045006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:37:55.849868 2523121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:37:55.850185 2523121 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0915 06:37:55.850346 2523121 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:37:55.853249 2523121 out.go:169] Using Docker driver with root privileges
	I0915 06:37:55.856011 2523121 cni.go:84] Creating CNI manager for ""
	I0915 06:37:55.856091 2523121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:37:55.856103 2523121 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 06:37:55.856196 2523121 start.go:340] cluster config:
	{Name:download-only-196406 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-196406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:37:55.859027 2523121 out.go:97] Starting "download-only-196406" primary control-plane node in "download-only-196406" cluster
	I0915 06:37:55.859066 2523121 cache.go:121] Beginning downloading kic base image for docker with crio
	I0915 06:37:55.861659 2523121 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:37:55.861710 2523121 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0915 06:37:55.861800 2523121 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:37:55.877256 2523121 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:37:55.877454 2523121 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:37:55.877566 2523121 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:37:55.923583 2523121 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0915 06:37:55.923612 2523121 cache.go:56] Caching tarball of preloaded images
	I0915 06:37:55.924405 2523121 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0915 06:37:55.927369 2523121 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0915 06:37:55.927401 2523121 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0915 06:37:56.011184 2523121 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0915 06:37:59.853995 2523121 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0915 06:37:59.854103 2523121 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19644-2517725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-196406 host does not exist
	  To start a cluster, run: "minikube start -p download-only-196406"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
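For reference, the Last Start log above records the v1.20.0 preload being downloaded with an md5 checksum parameter and then verified (preload.go getting/verifying the checksum). Using the URL and checksum taken directly from that log, the same verification can be sketched by hand (assuming curl and md5sum are available):

	curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	md5sum preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	# should print 59cd2ef07b53f039bfd1761b921f2a02, matching the checksum= query parameter logged above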

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-196406
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-600407 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-600407 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.015474302s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.02s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-600407
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-600407: exit status 85 (86.277475ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-196406 | jenkins | v1.34.0 | 15 Sep 24 06:37 UTC |                     |
	|         | -p download-only-196406        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| delete  | -p download-only-196406        | download-only-196406 | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC | 15 Sep 24 06:38 UTC |
	| start   | -o=json --download-only        | download-only-600407 | jenkins | v1.34.0 | 15 Sep 24 06:38 UTC |                     |
	|         | -p download-only-600407        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:38:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:38:05.763757 2523322 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:38:05.764171 2523322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:38:05.764185 2523322 out.go:358] Setting ErrFile to fd 2...
	I0915 06:38:05.764190 2523322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:38:05.764447 2523322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 06:38:05.764956 2523322 out.go:352] Setting JSON to true
	I0915 06:38:05.765804 2523322 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":51637,"bootTime":1726330649,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0915 06:38:05.765881 2523322 start.go:139] virtualization:  
	I0915 06:38:05.767926 2523322 out.go:97] [download-only-600407] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 06:38:05.768249 2523322 notify.go:220] Checking for updates...
	I0915 06:38:05.769589 2523322 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:38:05.770862 2523322 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:38:05.772251 2523322 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 06:38:05.773367 2523322 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	I0915 06:38:05.774646 2523322 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0915 06:38:05.776852 2523322 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:38:05.777086 2523322 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:38:05.799259 2523322 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:38:05.799377 2523322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:38:05.858062 2523322 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:38:05.848682631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:38:05.858185 2523322 docker.go:318] overlay module found
	I0915 06:38:05.859651 2523322 out.go:97] Using the docker driver based on user configuration
	I0915 06:38:05.859683 2523322 start.go:297] selected driver: docker
	I0915 06:38:05.859690 2523322 start.go:901] validating driver "docker" against <nil>
	I0915 06:38:05.859819 2523322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:38:05.914598 2523322 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:38:05.905115611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:38:05.914809 2523322 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:38:05.915087 2523322 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0915 06:38:05.915253 2523322 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:38:05.916771 2523322 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-600407 host does not exist
	  To start a cluster, run: "minikube start -p download-only-600407"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-600407
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-404653 --alsologtostderr --binary-mirror http://127.0.0.1:33149 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-404653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-404653
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-078133
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-078133: exit status 85 (73.422624ms)

                                                
                                                
-- stdout --
	* Profile "addons-078133" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-078133"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-078133
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-078133: exit status 85 (83.893197ms)

                                                
                                                
-- stdout --
	* Profile "addons-078133" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-078133"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (212.75s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-078133 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-078133 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m32.748387106s)
--- PASS: TestAddons/Setup (212.75s)
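Aside (not asserted by the test itself): once that start completes, the resulting addon set can be confirmed with minikube's addon listing, which prints each addon's enabled/disabled status:

	out/minikube-linux-arm64 addons list -p addons-078133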

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.24s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-078133 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-078133 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.24s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.07s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-css4m" [0719df95-52c3-4189-83bb-b5fa0fef2577] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00402049s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-078133
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-078133: (6.060011131s)
--- PASS: TestAddons/parallel/InspektorGadget (12.07s)

                                                
                                    
TestAddons/parallel/CSI (46.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.009388ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-078133 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-078133 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0eb5724e-69fb-425e-9974-8725c6f9fe43] Pending
helpers_test.go:344: "task-pv-pod" [0eb5724e-69fb-425e-9974-8725c6f9fe43] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0eb5724e-69fb-425e-9974-8725c6f9fe43] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004079329s
addons_test.go:590: (dbg) Run:  kubectl --context addons-078133 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-078133 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-078133 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-078133 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-078133 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-078133 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-078133 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a4aceba3-66f6-47dd-b22b-3fd3e570b45f] Pending
helpers_test.go:344: "task-pv-pod-restore" [a4aceba3-66f6-47dd-b22b-3fd3e570b45f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a4aceba3-66f6-47dd-b22b-3fd3e570b45f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00353814s
addons_test.go:632: (dbg) Run:  kubectl --context addons-078133 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-078133 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-078133 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-078133 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.810790584s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-078133 addons disable volumesnapshots --alsologtostderr -v=1: (1.02413821s)
--- PASS: TestAddons/parallel/CSI (46.86s)
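
Condensed, the test walks the full CSI snapshot/restore cycle: provision a PVC, attach a pod, snapshot the volume, delete the originals, then restore both from the snapshot. The manifests live under testdata/csi-hostpath-driver/ and are not reproduced in this log:

    kubectl --context addons-078133 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-078133 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-078133 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # poll until the snapshot reports readyToUse=true
    kubectl --context addons-078133 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl --context addons-078133 delete pod task-pv-pod
    kubectl --context addons-078133 delete pvc hpvc
    kubectl --context addons-078133 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-078133 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml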

TestAddons/parallel/Headlamp (17.73s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-078133 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-qw7d8" [5ccf621f-6fae-40ee-a41c-53c53d977f9f] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-qw7d8" [5ccf621f-6fae-40ee-a41c-53c53d977f9f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-qw7d8" [5ccf621f-6fae-40ee-a41c-53c53d977f9f] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003294757s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-078133 addons disable headlamp --alsologtostderr -v=1: (5.767775238s)
--- PASS: TestAddons/parallel/Headlamp (17.73s)

TestAddons/parallel/CloudSpanner (5.66s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-pw84g" [6af7c52c-0c6e-4dd1-86f6-aec8fca52926] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.042245755s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-078133
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

TestAddons/parallel/LocalPath (53.64s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-078133 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-078133 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-078133 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [acf2ee38-acc9-4cb8-a5f7-5fda6973360c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [acf2ee38-acc9-4cb8-a5f7-5fda6973360c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [acf2ee38-acc9-4cb8-a5f7-5fda6973360c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.0051866s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-078133 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 ssh "cat /opt/local-path-provisioner/pvc-5e1f9a51-0651-4cff-bf4b-0987929107ab_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-078133 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-078133 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-078133 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.33589336s)
--- PASS: TestAddons/parallel/LocalPath (53.64s)
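
The read-back at addons_test.go:1009 is the core assertion: data written inside the pod must appear on the node under /opt/local-path-provisioner. The directory name embeds the generated PV name, so it has to be looked up first (a sketch; <volume-name> is a placeholder for the looked-up value):

    # find the PV backing the claim, then read the file straight off the node
    kubectl --context addons-078133 get pvc test-pvc -o jsonpath={.spec.volumeName}
    out/minikube-linux-arm64 -p addons-078133 ssh "cat /opt/local-path-provisioner/<volume-name>_default_test-pvc/file1"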

TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cwx62" [6bc66e81-1049-45ef-b236-d0ad12ba82cf] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004184066s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-078133
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

TestAddons/parallel/Yakd (11.9s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-6vdwz" [bde671f2-e21b-4672-9484-ab133bfdb447] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004155191s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-078133 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-078133 addons disable yakd --alsologtostderr -v=1: (5.892200869s)
--- PASS: TestAddons/parallel/Yakd (11.90s)

TestAddons/StoppedEnableDisable (6.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-078133
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-078133: (6.000113057s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-078133
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-078133
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-078133
--- PASS: TestAddons/StoppedEnableDisable (6.27s)
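
The point of this test is that addon toggling still works against a stopped profile: enable/disable can be recorded in the saved profile config while the node is down. Sketched with this run's profile:

    out/minikube-linux-arm64 stop -p addons-078133
    out/minikube-linux-arm64 addons enable dashboard -p addons-078133
    out/minikube-linux-arm64 addons disable dashboard -p addons-078133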

TestCertOptions (32.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-893608 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-893608 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (29.59972401s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-893608 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-893608 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-893608 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-893608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-893608
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-893608: (2.004492618s)
--- PASS: TestCertOptions (32.29s)
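
The openssl step is how the extra SANs are verified: every --apiserver-ips and --apiserver-names value must appear in the apiserver's serving certificate. A focused variant of the same check (the grep is an added convenience, not part of the test):

    out/minikube-linux-arm64 -p cert-options-893608 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 "Subject Alternative Name"
    # expect 192.168.15.15 and www.google.com among the DNS/IP entries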

TestCertExpiration (248.63s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-521905 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0915 07:36:46.045903 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-521905 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.023008363s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-521905 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-521905 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.343324683s)
helpers_test.go:175: Cleaning up "cert-expiration-521905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-521905
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-521905: (2.261809802s)
--- PASS: TestCertExpiration (248.63s)
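
The two starts of the same profile are the whole test: certificates minted with --cert-expiration=3m are allowed to lapse, and the second start with --cert-expiration=8760h must detect and regenerate them. Sketched below; the test waits out the three minutes between starts, for which the sleep stands in:

    out/minikube-linux-arm64 start -p cert-expiration-521905 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
    sleep 180   # let the short-lived certs expire
    out/minikube-linux-arm64 start -p cert-expiration-521905 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio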

TestForceSystemdFlag (45.88s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-732964 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-732964 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.914186119s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-732964 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-732964" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-732964
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-732964: (2.601161266s)
--- PASS: TestForceSystemdFlag (45.88s)
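
The cat of /etc/crio/crio.conf.d/02-crio.conf is the assertion point: with --force-systemd, CRI-O's drop-in should select the systemd cgroup manager. A narrowed version (the expected TOML key is an assumption about the drop-in's contents, which this log does not show):

    out/minikube-linux-arm64 -p force-systemd-flag-732964 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # expected: cgroup_manager = "systemd"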

TestForceSystemdEnv (38.94s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-145779 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-145779 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.595505904s)
helpers_test.go:175: Cleaning up "force-systemd-env-145779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-145779
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-145779: (2.347827191s)
--- PASS: TestForceSystemdEnv (38.94s)

TestErrorSpam/setup (32.77s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-996911 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-996911 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-996911 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-996911 --driver=docker  --container-runtime=crio: (32.773763412s)
--- PASS: TestErrorSpam/setup (32.77s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 status
--- PASS: TestErrorSpam/status (1.20s)

TestErrorSpam/pause (1.93s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 pause
--- PASS: TestErrorSpam/pause (1.93s)

TestErrorSpam/unpause (1.88s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 unpause
--- PASS: TestErrorSpam/unpause (1.88s)

TestErrorSpam/stop (1.47s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 stop: (1.271282197s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-996911 --log_dir /tmp/nospam-996911 stop
--- PASS: TestErrorSpam/stop (1.47s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19644-2517725/.minikube/files/etc/test/nested/copy/2523116/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (82.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-143496 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-143496 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m22.294249638s)
--- PASS: TestFunctional/serial/StartWithProxy (82.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.77s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-143496 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-143496 --alsologtostderr -v=8: (26.763683878s)
functional_test.go:663: soft start took 26.767038947s for "functional-143496" cluster.
--- PASS: TestFunctional/serial/SoftStart (26.77s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-143496 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-143496 cache add registry.k8s.io/pause:3.1: (1.433439411s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-143496 cache add registry.k8s.io/pause:3.3: (1.554679727s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-143496 cache add registry.k8s.io/pause:latest: (1.466060358s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.45s)

TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-143496 /tmp/TestFunctionalserialCacheCmdcacheadd_local1154180194/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 cache add minikube-local-cache-test:functional-143496
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 cache delete minikube-local-cache-test:functional-143496
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-143496
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-143496 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.242729ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-143496 cache reload: (1.261537667s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
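
The reload test shows the cache lifecycle end to end: removing an image from the node with crictl does not touch the host-side cache, so cache reload can push it back:

    out/minikube-linux-arm64 -p functional-143496 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-143496 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: gone from the node
    out/minikube-linux-arm64 -p functional-143496 cache reload
    out/minikube-linux-arm64 -p functional-143496 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored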

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 kubectl -- --context functional-143496 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-143496 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (42.16s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-143496 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-143496 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.159311785s)
functional_test.go:761: restart took 42.159422535s for "functional-143496" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.16s)
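
--extra-config takes component.flag=value pairs and is applied by soft-restarting the existing profile; here it turns on an apiserver admission plugin:

    out/minikube-linux-arm64 start -p functional-143496 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all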

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-143496 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
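
The health check is a label query over the static control-plane pods; each must report phase Running and a Ready condition. A compact equivalent of the JSON inspection above:

    kubectl --context functional-143496 get po -l tier=control-plane -n kube-system \
        -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'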

TestFunctional/serial/LogsCmd (1.81s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-143496 logs: (1.814167649s)
--- PASS: TestFunctional/serial/LogsCmd (1.81s)

TestFunctional/serial/LogsFileCmd (1.81s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 logs --file /tmp/TestFunctionalserialLogsFileCmd3974620925/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-143496 logs --file /tmp/TestFunctionalserialLogsFileCmd3974620925/001/logs.txt: (1.809911506s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.81s)

TestFunctional/serial/InvalidService (4.3s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-143496 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-143496
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-143496: exit status 115 (502.223968ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30105 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-143496 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)
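
minikube service refuses to emit a usable URL when the service has no running backing pods, and the dedicated exit code (115, SVC_UNREACHABLE) is what the test asserts:

    kubectl --context functional-143496 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-143496; echo "exit: $?"   # expect 115
    kubectl --context functional-143496 delete -f testdata/invalidsvc.yaml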

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-143496 config get cpus: exit status 14 (65.401186ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-143496 config get cpus: exit status 14 (75.335229ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
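
config get on a key that was never set (or has been unset) exits 14 with an error on stderr, which is why the run above alternates unset/get/set/get:

    out/minikube-linux-arm64 -p functional-143496 config get cpus     # exit 14: key not set
    out/minikube-linux-arm64 -p functional-143496 config set cpus 2
    out/minikube-linux-arm64 -p functional-143496 config get cpus     # prints 2
    out/minikube-linux-arm64 -p functional-143496 config unset cpus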

TestFunctional/parallel/DashboardCmd (11.49s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-143496 --alsologtostderr -v=1]
E0915 07:01:46.046260 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:01:46.056574 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:01:46.067792 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:01:46.089350 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:01:46.131061 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:01:46.212704 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:01:46.374070 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:01:46.696076 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:01:47.337682 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-143496 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2551249: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.49s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-143496 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-143496 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (231.035341ms)

-- stdout --
	* [functional-143496] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0915 07:01:45.385134 2550952 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:01:45.385409 2550952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:01:45.385443 2550952 out.go:358] Setting ErrFile to fd 2...
	I0915 07:01:45.385482 2550952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:01:45.385890 2550952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 07:01:45.386390 2550952 out.go:352] Setting JSON to false
	I0915 07:01:45.387667 2550952 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":53056,"bootTime":1726330649,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0915 07:01:45.387859 2550952 start.go:139] virtualization:  
	I0915 07:01:45.390967 2550952 out.go:177] * [functional-143496] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 07:01:45.394855 2550952 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:01:45.394962 2550952 notify.go:220] Checking for updates...
	I0915 07:01:45.398187 2550952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:01:45.400885 2550952 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 07:01:45.403421 2550952 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	I0915 07:01:45.406676 2550952 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 07:01:45.409417 2550952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:01:45.412724 2550952 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:01:45.413315 2550952 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:01:45.459018 2550952 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 07:01:45.459155 2550952 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:01:45.532922 2550952 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 07:01:45.520583361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 07:01:45.533047 2550952 docker.go:318] overlay module found
	I0915 07:01:45.536078 2550952 out.go:177] * Using the docker driver based on existing profile
	I0915 07:01:45.539106 2550952 start.go:297] selected driver: docker
	I0915 07:01:45.539131 2550952 start.go:901] validating driver "docker" against &{Name:functional-143496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-143496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:01:45.539262 2550952 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:01:45.542663 2550952 out.go:201] 
	W0915 07:01:45.545350 2550952 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 07:01:45.548069 2550952 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-143496 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.55s)
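
--dry-run runs the full validation pipeline without touching the node, so an undersized --memory request is rejected up front with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as captured above:

    out/minikube-linux-arm64 start -p functional-143496 --dry-run --memory 250MB \
        --driver=docker --container-runtime=crio; echo "exit: $?"   # expect 23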

TestFunctional/parallel/InternationalLanguage (0.31s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-143496 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-143496 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (308.966224ms)

-- stdout --
	* [functional-143496] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0915 07:01:45.109757 2550906 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:01:45.110033 2550906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:01:45.110069 2550906 out.go:358] Setting ErrFile to fd 2...
	I0915 07:01:45.110209 2550906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:01:45.110790 2550906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 07:01:45.112478 2550906 out.go:352] Setting JSON to false
	I0915 07:01:45.113816 2550906 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":53056,"bootTime":1726330649,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0915 07:01:45.113980 2550906 start.go:139] virtualization:  
	I0915 07:01:45.118247 2550906 out.go:177] * [functional-143496] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0915 07:01:45.121394 2550906 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:01:45.121566 2550906 notify.go:220] Checking for updates...
	I0915 07:01:45.126815 2550906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:01:45.131488 2550906 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 07:01:45.134417 2550906 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	I0915 07:01:45.137634 2550906 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 07:01:45.140604 2550906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:01:45.144437 2550906 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:01:45.146695 2550906 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:01:45.197283 2550906 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 07:01:45.197434 2550906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:01:45.298688 2550906 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 07:01:45.285229192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 07:01:45.298831 2550906 docker.go:318] overlay module found
	I0915 07:01:45.302431 2550906 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0915 07:01:45.305904 2550906 start.go:297] selected driver: docker
	I0915 07:01:45.305936 2550906 start.go:901] validating driver "docker" against &{Name:functional-143496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-143496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 07:01:45.306059 2550906 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:01:45.310104 2550906 out.go:201] 
	W0915 07:01:45.312952 2550906 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 07:01:45.315834 2550906 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.31s)
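
For reference, the French stderr above matches the English DryRun failure earlier in this report: "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on the existing profile", and the X line is the localized RSRC_INSUFFICIENT_REQ_MEMORY error ("Requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A minimal sketch of reproducing the localized message by hand, assuming minikube picks the locale up from LC_ALL (an assumption; the test harness may set a different locale variable):

	# hypothetical reproduction; LC_ALL=fr is an assumption, not taken from this log
	LC_ALL=fr out/minikube-linux-arm64 start -p functional-143496 --dry-run --memory 250MB \
	  --alsologtostderr --driver=docker --container-runtime=crio
	# expected: exit status 23 and the localized RSRC_INSUFFICIENT_REQ_MEMORY message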

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)
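
The second invocation above exercises custom Go-template output; the same check can be run by hand (note that the format string spells "kublet" exactly as the test does):

	out/minikube-linux-arm64 -p functional-143496 status \
	  -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	# on a healthy cluster this should print something like:
	#   host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured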

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-143496 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-143496 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-8427t" [fa498a1e-aacd-4f68-9187-95436cb94cbb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-8427t" [fa498a1e-aacd-4f68-9187-95436cb94cbb] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.010310089s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30242
functional_test.go:1675: http://192.168.49.2:30242: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-8427t

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30242
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.74s)
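
A sketch of the same flow by hand, using only commands that appear in the log plus a plain HTTP probe (curl is an assumption; any HTTP client works):

	kubectl --context functional-143496 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-143496 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-arm64 -p functional-143496 service hello-node-connect --url)
	curl -s "$URL"   # echoserver replies with the request details shown above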

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a411d78f-2a50-4dcf-a8ee-032adb71fa8a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003244243s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-143496 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-143496 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-143496 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-143496 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3c3d8165-115e-44c2-95ff-0c4ad98804d8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3c3d8165-115e-44c2-95ff-0c4ad98804d8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003284802s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-143496 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-143496 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-143496 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eb3b8db5-5e3e-4d50-93d2-d44a20201bea] Pending
helpers_test.go:344: "sp-pod" [eb3b8db5-5e3e-4d50-93d2-d44a20201bea] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004200923s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-143496 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.00s)
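
The delete-and-recreate sequence above is the actual persistence check: /tmp/mount/foo is written by the first sp-pod, the pod is deleted, and the file must still be visible from the second pod because both pods mount the same PVC-backed volume. By hand:

	kubectl --context functional-143496 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-143496 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-143496 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-143496 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-143496 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-143496 exec sp-pod -- ls /tmp/mount   # foo must survive the pod deletion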

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh -n functional-143496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 cp functional-143496:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1280271823/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh -n functional-143496 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh -n functional-143496 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.41s)
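
The three copy directions exercised above (host to VM, VM back to host, host to a not-yet-existing VM path), condensed:

	out/minikube-linux-arm64 -p functional-143496 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-143496 cp functional-143496:/home/docker/cp-test.txt /tmp/cp-test.txt
	out/minikube-linux-arm64 -p functional-143496 ssh -n functional-143496 "sudo cat /home/docker/cp-test.txt"
	# /tmp/cp-test.txt is an illustrative destination; the test uses a per-run temp directory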

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2523116/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo cat /etc/test/nested/copy/2523116/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2523116.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo cat /etc/ssl/certs/2523116.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2523116.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo cat /usr/share/ca-certificates/2523116.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/25231162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo cat /etc/ssl/certs/25231162.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/25231162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo cat /usr/share/ca-certificates/25231162.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)
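
The file names above apparently derive from the test process ID (2523116, the same PID stamped on the E0915 lines elsewhere in this run); the test verifies that a host certificate is mirrored into the VM both under its PEM name and under its OpenSSL subject-hash name. Spot-checking one pair by hand:

	out/minikube-linux-arm64 -p functional-143496 ssh "sudo cat /etc/ssl/certs/2523116.pem"
	out/minikube-linux-arm64 -p functional-143496 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named copy of the same cert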

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-143496 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-143496 ssh "sudo systemctl is-active docker": exit status 1 (339.303544ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-143496 ssh "sudo systemctl is-active containerd": exit status 1 (392.58816ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
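
Since this profile runs --container-runtime=crio, the docker and containerd units must be inactive; systemctl is-active exits with status 3 for an inactive unit, which is why the ssh wrapper reports "Process exited with status 3" while still printing "inactive". By hand:

	out/minikube-linux-arm64 -p functional-143496 ssh "sudo systemctl is-active docker"       # inactive, exit 3
	out/minikube-linux-arm64 -p functional-143496 ssh "sudo systemctl is-active containerd"   # inactive, exit 3
	out/minikube-linux-arm64 -p functional-143496 ssh "sudo systemctl is-active crio"         # assumption: unit name "crio", expected active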

                                                
                                    
x
+
TestFunctional/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-143496 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-143496 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-143496 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-143496 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2548855: os: process already finished
helpers_test.go:502: unable to terminate pid 2548671: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-143496 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-143496 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c6ded402-0473-48c9-8464-824c29e232c8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c6ded402-0473-48c9-8464-824c29e232c8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00408672s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-143496 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.213.135 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-143496 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
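
The serial tunnel sequence above, condensed into a by-hand version (the jsonpath query is taken verbatim from the log; curl and the job-control cleanup are assumptions):

	out/minikube-linux-arm64 -p functional-143496 tunnel --alsologtostderr &   # keeps LoadBalancer IPs routable while it runs
	kubectl --context functional-143496 apply -f testdata/testsvc.yaml
	IP=$(kubectl --context functional-143496 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip})
	curl -s "http://$IP"   # e.g. http://10.102.213.135 above
	kill %1                # tear the tunnel down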

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-143496 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-143496 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-xbrss" [625ee478-b31c-44dd-80cc-c139eca5d4ad] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-xbrss" [625ee478-b31c-44dd-80cc-c139eca5d4ad] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003614491s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.28s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "363.888462ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "90.376159ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "361.804287ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "63.016924ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdany-port3018064306/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726383700507892730" to /tmp/TestFunctionalparallelMountCmdany-port3018064306/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726383700507892730" to /tmp/TestFunctionalparallelMountCmdany-port3018064306/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726383700507892730" to /tmp/TestFunctionalparallelMountCmdany-port3018064306/001/test-1726383700507892730
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (327.655705ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 15 07:01 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 15 07:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 15 07:01 test-1726383700507892730
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh cat /mount-9p/test-1726383700507892730
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-143496 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f6ddbddb-3a82-47df-ba04-43cf01852bc1] Pending
helpers_test.go:344: "busybox-mount" [f6ddbddb-3a82-47df-ba04-43cf01852bc1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f6ddbddb-3a82-47df-ba04-43cf01852bc1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0915 07:01:48.619206 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [f6ddbddb-3a82-47df-ba04-43cf01852bc1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004048605s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-143496 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdany-port3018064306/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.39s)
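
The any-port flow by hand (the host path is illustrative; the initial failed findmnt above is expected while the 9p mount is still coming up, which is why the test simply retries):

	out/minikube-linux-arm64 mount -p functional-143496 /tmp/mnt:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-143496 ssh -- ls -la /mount-9p
	out/minikube-linux-arm64 -p functional-143496 ssh "sudo umount -f /mount-9p"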

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 service list -o json
functional_test.go:1494: Took "624.033066ms" to run "out/minikube-linux-arm64 -p functional-143496 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31687
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31687
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdspecific-port2131680518/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (449.584004ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T /mount-9p | grep 9p"
E0915 07:01:51.181552 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdspecific-port2131680518/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-143496 ssh "sudo umount -f /mount-9p": exit status 1 (362.295154ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-143496 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdspecific-port2131680518/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.45s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3985064507/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3985064507/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3985064507/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T" /mount1: exit status 1 (1.009987328s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-143496 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3985064507/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3985064507/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-143496 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3985064507/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.64s)
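
Cleanup relies on the mount command's kill switch, which terminates any lingering mount processes for the profile before the individual daemons are reaped:

	out/minikube-linux-arm64 mount -p functional-143496 --kill=true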

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-143496 version -o=json --components: (1.310042778s)
--- PASS: TestFunctional/parallel/Version/components (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-143496 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-143496
localhost/kicbase/echo-server:functional-143496
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-143496 image ls --format short --alsologtostderr:
I0915 07:02:03.691783 2553793 out.go:345] Setting OutFile to fd 1 ...
I0915 07:02:03.692030 2553793 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 07:02:03.692064 2553793 out.go:358] Setting ErrFile to fd 2...
I0915 07:02:03.692084 2553793 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 07:02:03.692358 2553793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
I0915 07:02:03.693138 2553793 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 07:02:03.693298 2553793 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 07:02:03.693833 2553793 cli_runner.go:164] Run: docker container inspect functional-143496 --format={{.State.Status}}
I0915 07:02:03.713469 2553793 ssh_runner.go:195] Run: systemctl --version
I0915 07:02:03.713525 2553793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-143496
I0915 07:02:03.736507 2553793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35758 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/functional-143496/id_rsa Username:docker}
I0915 07:02:03.833611 2553793 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-143496 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | b887aca7aed61 | 48.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| docker.io/library/nginx                 | latest             | 195245f0c7927 | 197MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| localhost/kicbase/echo-server           | functional-143496  | ce2d2cda2d858 | 4.79MB |
| localhost/minikube-local-cache-test     | functional-143496  | 667d7f03a2f87 | 3.33kB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-143496 image ls --format table --alsologtostderr:
I0915 07:02:04.241874 2553945 out.go:345] Setting OutFile to fd 1 ...
I0915 07:02:04.242097 2553945 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 07:02:04.242125 2553945 out.go:358] Setting ErrFile to fd 2...
I0915 07:02:04.242145 2553945 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 07:02:04.242429 2553945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
I0915 07:02:04.243163 2553945 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 07:02:04.243353 2553945 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 07:02:04.243887 2553945 cli_runner.go:164] Run: docker container inspect functional-143496 --format={{.State.Status}}
I0915 07:02:04.279852 2553945 ssh_runner.go:195] Run: systemctl --version
I0915 07:02:04.279921 2553945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-143496
I0915 07:02:04.302943 2553945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35758 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/functional-143496/id_rsa Username:docker}
I0915 07:02:04.402306 2553945 ssh_runner.go:195] Run: sudo crictl images --output json
E0915 07:02:06.545056 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
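
Each list format is backed by the same "sudo crictl images --output json" call inside the VM (see the stderr traces above); only the client-side rendering differs:

	out/minikube-linux-arm64 -p functional-143496 image ls --format short
	out/minikube-linux-arm64 -p functional-143496 image ls --format table
	out/minikube-linux-arm64 -p functional-143496 image ls --format json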

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-143496 image ls --format json --alsologtostderr:
[{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48375489"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172029"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-143496"],"size":"4788229"},{"id":"667d7f03a2f874ef090583ac055a82269c515d6a7dc929b7ee41fb62944b043f","repoDigests":["localhost/minikube-local-cache-test@sha256:e292f35f7bcfde84df6153b730d21bebaa15ce0f665bc430d001a8ba38028b66"],"repoTags":["localhost/minikube-local-cache-test:functional-143496"],"size":"3330"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-143496 image ls --format json --alsologtostderr:
I0915 07:02:03.985786 2553863 out.go:345] Setting OutFile to fd 1 ...
I0915 07:02:03.985972 2553863 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 07:02:03.985987 2553863 out.go:358] Setting ErrFile to fd 2...
I0915 07:02:03.985993 2553863 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 07:02:03.986305 2553863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
I0915 07:02:03.987035 2553863 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 07:02:03.987236 2553863 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 07:02:03.987807 2553863 cli_runner.go:164] Run: docker container inspect functional-143496 --format={{.State.Status}}
I0915 07:02:04.013407 2553863 ssh_runner.go:195] Run: systemctl --version
I0915 07:02:04.013476 2553863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-143496
I0915 07:02:04.036225 2553863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35758 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/functional-143496/id_rsa Username:docker}
I0915 07:02:04.134046 2553863 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
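The JSON above is a flat array of image records. For anyone consuming this output programmatically, the sketch below decodes it with only the standard library. It is a hypothetical illustration, not part of the test suite: the struct name is ours, while the field names, the string-typed size, and the binary path and profile name come straight from this log.

// decode_image_ls.go: hypothetical sketch of decoding `image ls --format json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors one element of the JSON array shown in the log above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-143496",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		// Untagged entries (e.g. the dashboard images above) have empty repoTags.
		fmt.Println(img.Size, img.RepoTags)
	}
}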

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-143496 image ls --format yaml --alsologtostderr:
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 667d7f03a2f874ef090583ac055a82269c515d6a7dc929b7ee41fb62944b043f
repoDigests:
- localhost/minikube-local-cache-test@sha256:e292f35f7bcfde84df6153b730d21bebaa15ce0f665bc430d001a8ba38028b66
repoTags:
- localhost/minikube-local-cache-test:functional-143496
size: "3330"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "48375489"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-143496
size: "4788229"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171
repoTags:
- docker.io/library/nginx:latest
size: "197172029"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-143496 image ls --format yaml --alsologtostderr:
I0915 07:02:03.696092 2553794 out.go:345] Setting OutFile to fd 1 ...
I0915 07:02:03.696295 2553794 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 07:02:03.696307 2553794 out.go:358] Setting ErrFile to fd 2...
I0915 07:02:03.696313 2553794 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 07:02:03.696589 2553794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
I0915 07:02:03.697344 2553794 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 07:02:03.697504 2553794 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 07:02:03.698036 2553794 cli_runner.go:164] Run: docker container inspect functional-143496 --format={{.State.Status}}
I0915 07:02:03.719097 2553794 ssh_runner.go:195] Run: systemctl --version
I0915 07:02:03.719161 2553794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-143496
I0915 07:02:03.742468 2553794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35758 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/functional-143496/id_rsa Username:docker}
I0915 07:02:03.847263 2553794 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-143496 ssh pgrep buildkitd: exit status 1 (366.789618ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image build -t localhost/my-image:functional-143496 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-143496 image build -t localhost/my-image:functional-143496 testdata/build --alsologtostderr: (2.860181935s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-143496 image build -t localhost/my-image:functional-143496 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d553ba767ed
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-143496
--> 67bf58490cc
Successfully tagged localhost/my-image:functional-143496
67bf58490cc68a0c0b2c9c6819d1af5aec0faad37ec26c2d4d7683665e178623
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-143496 image build -t localhost/my-image:functional-143496 testdata/build --alsologtostderr:
I0915 07:02:04.326006 2553959 out.go:345] Setting OutFile to fd 1 ...
I0915 07:02:04.327026 2553959 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 07:02:04.327076 2553959 out.go:358] Setting ErrFile to fd 2...
I0915 07:02:04.327102 2553959 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 07:02:04.327425 2553959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
I0915 07:02:04.328202 2553959 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 07:02:04.329084 2553959 config.go:182] Loaded profile config "functional-143496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 07:02:04.329720 2553959 cli_runner.go:164] Run: docker container inspect functional-143496 --format={{.State.Status}}
I0915 07:02:04.349526 2553959 ssh_runner.go:195] Run: systemctl --version
I0915 07:02:04.349585 2553959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-143496
I0915 07:02:04.368560 2553959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35758 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/functional-143496/id_rsa Username:docker}
I0915 07:02:04.469552 2553959 build_images.go:161] Building image from path: /tmp/build.2960192481.tar
I0915 07:02:04.469623 2553959 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0915 07:02:04.479373 2553959 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2960192481.tar
I0915 07:02:04.483062 2553959 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2960192481.tar: stat -c "%s %y" /var/lib/minikube/build/build.2960192481.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2960192481.tar': No such file or directory
stat: cannot statx '/var/lib/minikube/build/build.2960192481.tar': No such file or directory
I0915 07:02:04.483095 2553959 ssh_runner.go:362] scp /tmp/build.2960192481.tar --> /var/lib/minikube/build/build.2960192481.tar (3072 bytes)
I0915 07:02:04.510728 2553959 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2960192481
I0915 07:02:04.521480 2553959 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2960192481 -xf /var/lib/minikube/build/build.2960192481.tar
I0915 07:02:04.531365 2553959 crio.go:315] Building image: /var/lib/minikube/build/build.2960192481
I0915 07:02:04.531451 2553959 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-143496 /var/lib/minikube/build/build.2960192481 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0915 07:02:07.088111 2553959 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-143496 /var/lib/minikube/build/build.2960192481 --cgroup-manager=cgroupfs: (2.556634155s)
I0915 07:02:07.088188 2553959 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2960192481
I0915 07:02:07.098196 2553959 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2960192481.tar
I0915 07:02:07.109867 2553959 build_images.go:217] Built localhost/my-image:functional-143496 from /tmp/build.2960192481.tar
I0915 07:02:07.109899 2553959 build_images.go:133] succeeded building to: functional-143496
I0915 07:02:07.109904 2553959 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
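The stderr trace shows the full build path on the crio runtime: minikube tars the local context, scps it to /var/lib/minikube/build inside the node, untars it, and drives `sudo podman build --cgroup-manager=cgroupfs`. The three STEP lines imply a build context equivalent to the hypothetical one recreated below; the Dockerfile text and the content.txt payload are our reconstruction, while the command line matches the test's.

// rebuild_my_image.go: hypothetical reconstruction of the context behind the
// STEP 1/3..3/3 lines above, driven through the same minikube CLI call.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "minikube-build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Inferred from the log: FROM busybox, a no-op RUN, then an ADD.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	// content.txt's payload is not visible in the log; any bytes will do here.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-143496",
		"image", "build", "-t", "localhost/my-image:functional-143496", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}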

TestFunctional/parallel/ImageCommands/Setup (0.69s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
E0915 07:01:56.303495 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-143496
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image load --daemon kicbase/echo-server:functional-143496 --alsologtostderr
2024/09/15 07:01:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-143496 image load --daemon kicbase/echo-server:functional-143496 --alsologtostderr: (1.341469023s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image load --daemon kicbase/echo-server:functional-143496 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-143496
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image load --daemon kicbase/echo-server:functional-143496 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image save kicbase/echo-server:functional-143496 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image rm kicbase/echo-server:functional-143496 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)
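ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save, delete, restore round trip through echo-server-save.tar. The archive is an ordinary image tarball; below is a small standalone sketch (ours, not the suite's) that lists its entries with only the standard library, the path being the one the save step above used.

// inspect_save_tar.go: hypothetical sketch listing the tarball written by
// `minikube image save`.
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := os.Open("/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	tr := tar.NewReader(f)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break // end of archive
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%10d  %s\n", hdr.Size, hdr.Name)
	}
}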

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-143496
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-143496 image save --daemon kicbase/echo-server:functional-143496 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-143496
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-143496
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-143496
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-143496
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (174.37s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-985632 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0915 07:02:27.026476 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:03:07.988932 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:04:29.910809 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-985632 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m53.510879561s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (174.37s)
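With --ha, the start brings up three control-plane nodes behind a shared endpoint, and the status traces later in this log probe it at https://192.168.49.254:8443/healthz. Below is a standalone sketch of that probe: the endpoint comes from the log, skipping TLS verification is our shortcut in place of minikube's real client certificates, and we assume /healthz is readable without credentials, as it is on a default apiserver.

// probe_healthz.go: hypothetical re-creation of the apiserver health check
// seen in the status logs below ("Checking apiserver healthz at ...").
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Sketch-only shortcut: trust the cluster's self-signed serving cert.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // the log shows 200 and "ok"
}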

TestMultiControlPlane/serial/DeployApp (9.83s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-985632 -- rollout status deployment/busybox: (6.703514102s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-c958t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-h84wj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-r4wpp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-c958t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-h84wj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-r4wpp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-c958t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-h84wj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-r4wpp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.83s)

TestMultiControlPlane/serial/PingHostFromPods (1.81s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-c958t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-c958t -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-h84wj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-h84wj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-r4wpp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-985632 -- exec busybox-7dff88458-r4wpp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.81s)
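The shell pipeline in each exec (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) assumes the old busybox nslookup layout, where line 5 holds the resolved address as its third space-separated field, which the follow-up ping then targets. Below is a hypothetical Go rendering of just that extraction; the sample output is illustrative, not captured from this run.

// extract_addr.go: hypothetical equivalent of `awk 'NR==5' | cut -d' ' -f3`.
package main

import (
	"fmt"
	"strings"
)

func thirdFieldOfLine5(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	// cut -d' ' splits on single spaces and keeps empty fields, so mirror it
	// with strings.Split rather than strings.Fields.
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative old-busybox nslookup output; real output may differ.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1\n"
	fmt.Println(thirdFieldOfLine5(sample)) // prints 192.168.49.1
}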

TestMultiControlPlane/serial/AddWorkerNode (63.56s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-985632 -v=7 --alsologtostderr
E0915 07:06:13.595187 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:13.601683 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:13.613108 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:13.634694 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:13.676199 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:13.758882 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:13.920737 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:14.242249 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:14.884297 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:16.165841 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:06:18.728204 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-985632 -v=7 --alsologtostderr: (1m2.561415898s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (63.56s)

TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-985632 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMultiControlPlane/serial/CopyFile (19.84s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp testdata/cp-test.txt ha-985632:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3739879315/001/cp-test_ha-985632.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632:/home/docker/cp-test.txt ha-985632-m02:/home/docker/cp-test_ha-985632_ha-985632-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632 "sudo cat /home/docker/cp-test.txt"
E0915 07:06:23.850043 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m02 "sudo cat /home/docker/cp-test_ha-985632_ha-985632-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632:/home/docker/cp-test.txt ha-985632-m03:/home/docker/cp-test_ha-985632_ha-985632-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m03 "sudo cat /home/docker/cp-test_ha-985632_ha-985632-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632:/home/docker/cp-test.txt ha-985632-m04:/home/docker/cp-test_ha-985632_ha-985632-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m04 "sudo cat /home/docker/cp-test_ha-985632_ha-985632-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp testdata/cp-test.txt ha-985632-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3739879315/001/cp-test_ha-985632-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m02:/home/docker/cp-test.txt ha-985632:/home/docker/cp-test_ha-985632-m02_ha-985632.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632 "sudo cat /home/docker/cp-test_ha-985632-m02_ha-985632.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m02:/home/docker/cp-test.txt ha-985632-m03:/home/docker/cp-test_ha-985632-m02_ha-985632-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m03 "sudo cat /home/docker/cp-test_ha-985632-m02_ha-985632-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m02:/home/docker/cp-test.txt ha-985632-m04:/home/docker/cp-test_ha-985632-m02_ha-985632-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m04 "sudo cat /home/docker/cp-test_ha-985632-m02_ha-985632-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp testdata/cp-test.txt ha-985632-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3739879315/001/cp-test_ha-985632-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m03:/home/docker/cp-test.txt ha-985632:/home/docker/cp-test_ha-985632-m03_ha-985632.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632 "sudo cat /home/docker/cp-test_ha-985632-m03_ha-985632.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m03:/home/docker/cp-test.txt ha-985632-m02:/home/docker/cp-test_ha-985632-m03_ha-985632-m02.txt
E0915 07:06:34.092317 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m02 "sudo cat /home/docker/cp-test_ha-985632-m03_ha-985632-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m03:/home/docker/cp-test.txt ha-985632-m04:/home/docker/cp-test_ha-985632-m03_ha-985632-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m04 "sudo cat /home/docker/cp-test_ha-985632-m03_ha-985632-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp testdata/cp-test.txt ha-985632-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3739879315/001/cp-test_ha-985632-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m04:/home/docker/cp-test.txt ha-985632:/home/docker/cp-test_ha-985632-m04_ha-985632.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632 "sudo cat /home/docker/cp-test_ha-985632-m04_ha-985632.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m04:/home/docker/cp-test.txt ha-985632-m02:/home/docker/cp-test_ha-985632-m04_ha-985632-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m02 "sudo cat /home/docker/cp-test_ha-985632-m04_ha-985632-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 cp ha-985632-m04:/home/docker/cp-test.txt ha-985632-m03:/home/docker/cp-test_ha-985632-m04_ha-985632-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 ssh -n ha-985632-m03 "sudo cat /home/docker/cp-test_ha-985632-m04_ha-985632-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.84s)
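The copy matrix above is exhaustive: each of the four nodes receives testdata/cp-test.txt, sends it back to the host's temp dir, and forwards it to every other node, with an `ssh ... sudo cat` verification after each hop (4 nodes yields the 20 cp operations logged). A small sketch of ours that enumerates the same matrix:

// copy_matrix.go: hypothetical enumeration of the CopyFile matrix above.
package main

import "fmt"

func main() {
	nodes := []string{"ha-985632", "ha-985632-m02", "ha-985632-m03", "ha-985632-m04"}
	tmp := "/tmp/TestMultiControlPlaneserialCopyFile3739879315/001" // from the log
	for _, src := range nodes {
		// host -> node, then node -> host
		fmt.Printf("cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		fmt.Printf("cp %s:/home/docker/cp-test.txt %s/cp-test_%s.txt\n", src, tmp, src)
		// node -> every other node
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			fmt.Printf("cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
		}
	}
}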

TestMultiControlPlane/serial/StopSecondaryNode (12.89s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 node stop m02 -v=7 --alsologtostderr
E0915 07:06:46.045597 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-985632 node stop m02 -v=7 --alsologtostderr: (12.100812374s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr: exit status 7 (786.723925ms)

-- stdout --
	ha-985632
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-985632-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-985632-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-985632-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0915 07:06:52.798143 2569623 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:06:52.798271 2569623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:06:52.798286 2569623 out.go:358] Setting ErrFile to fd 2...
	I0915 07:06:52.798291 2569623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:06:52.798551 2569623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 07:06:52.798742 2569623 out.go:352] Setting JSON to false
	I0915 07:06:52.798767 2569623 mustload.go:65] Loading cluster: ha-985632
	I0915 07:06:52.799426 2569623 notify.go:220] Checking for updates...
	I0915 07:06:52.800930 2569623 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:06:52.800963 2569623 status.go:255] checking status of ha-985632 ...
	I0915 07:06:52.801627 2569623 cli_runner.go:164] Run: docker container inspect ha-985632 --format={{.State.Status}}
	I0915 07:06:52.830640 2569623 status.go:330] ha-985632 host status = "Running" (err=<nil>)
	I0915 07:06:52.830664 2569623 host.go:66] Checking if "ha-985632" exists ...
	I0915 07:06:52.831058 2569623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632
	I0915 07:06:52.872864 2569623 host.go:66] Checking if "ha-985632" exists ...
	I0915 07:06:52.873190 2569623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:06:52.873243 2569623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632
	I0915 07:06:52.892718 2569623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35763 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632/id_rsa Username:docker}
	I0915 07:06:52.990709 2569623 ssh_runner.go:195] Run: systemctl --version
	I0915 07:06:52.995858 2569623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:06:53.011443 2569623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:06:53.070222 2569623 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-15 07:06:53.059969807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 07:06:53.070908 2569623 kubeconfig.go:125] found "ha-985632" server: "https://192.168.49.254:8443"
	I0915 07:06:53.070945 2569623 api_server.go:166] Checking apiserver status ...
	I0915 07:06:53.070994 2569623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:06:53.082740 2569623 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	I0915 07:06:53.092576 2569623 api_server.go:182] apiserver freezer: "8:freezer:/docker/473137b9a5acd89e90906d74264015a7d04e6af747aa23db7af2a966f4e17226/crio/crio-d602a1d390506d995c49782f0391d9b0e1ca9729468d98baf9aafcafc8f7c350"
	I0915 07:06:53.092674 2569623 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/473137b9a5acd89e90906d74264015a7d04e6af747aa23db7af2a966f4e17226/crio/crio-d602a1d390506d995c49782f0391d9b0e1ca9729468d98baf9aafcafc8f7c350/freezer.state
	I0915 07:06:53.101751 2569623 api_server.go:204] freezer state: "THAWED"
	I0915 07:06:53.101781 2569623 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0915 07:06:53.109631 2569623 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0915 07:06:53.109660 2569623 status.go:422] ha-985632 apiserver status = Running (err=<nil>)
	I0915 07:06:53.109692 2569623 status.go:257] ha-985632 status: &{Name:ha-985632 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:06:53.109716 2569623 status.go:255] checking status of ha-985632-m02 ...
	I0915 07:06:53.110088 2569623 cli_runner.go:164] Run: docker container inspect ha-985632-m02 --format={{.State.Status}}
	I0915 07:06:53.127654 2569623 status.go:330] ha-985632-m02 host status = "Stopped" (err=<nil>)
	I0915 07:06:53.127687 2569623 status.go:343] host is not running, skipping remaining checks
	I0915 07:06:53.127696 2569623 status.go:257] ha-985632-m02 status: &{Name:ha-985632-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:06:53.127717 2569623 status.go:255] checking status of ha-985632-m03 ...
	I0915 07:06:53.128051 2569623 cli_runner.go:164] Run: docker container inspect ha-985632-m03 --format={{.State.Status}}
	I0915 07:06:53.147208 2569623 status.go:330] ha-985632-m03 host status = "Running" (err=<nil>)
	I0915 07:06:53.147239 2569623 host.go:66] Checking if "ha-985632-m03" exists ...
	I0915 07:06:53.147720 2569623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632-m03
	I0915 07:06:53.167020 2569623 host.go:66] Checking if "ha-985632-m03" exists ...
	I0915 07:06:53.167499 2569623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:06:53.167546 2569623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m03
	I0915 07:06:53.187884 2569623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35773 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m03/id_rsa Username:docker}
	I0915 07:06:53.286610 2569623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:06:53.300932 2569623 kubeconfig.go:125] found "ha-985632" server: "https://192.168.49.254:8443"
	I0915 07:06:53.300973 2569623 api_server.go:166] Checking apiserver status ...
	I0915 07:06:53.301023 2569623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:06:53.312527 2569623 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1289/cgroup
	I0915 07:06:53.323074 2569623 api_server.go:182] apiserver freezer: "8:freezer:/docker/894d00feb70d29b878047ccf8f37d0b56e22a40ead87f803bb3816f6c3fdf6af/crio/crio-a3bf54244b7b172a5f888ae158c28646e923f2e5b084747f8bebd7c87122e8f8"
	I0915 07:06:53.323214 2569623 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/894d00feb70d29b878047ccf8f37d0b56e22a40ead87f803bb3816f6c3fdf6af/crio/crio-a3bf54244b7b172a5f888ae158c28646e923f2e5b084747f8bebd7c87122e8f8/freezer.state
	I0915 07:06:53.332531 2569623 api_server.go:204] freezer state: "THAWED"
	I0915 07:06:53.332573 2569623 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0915 07:06:53.340728 2569623 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0915 07:06:53.340766 2569623 status.go:422] ha-985632-m03 apiserver status = Running (err=<nil>)
	I0915 07:06:53.340776 2569623 status.go:257] ha-985632-m03 status: &{Name:ha-985632-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:06:53.340793 2569623 status.go:255] checking status of ha-985632-m04 ...
	I0915 07:06:53.341214 2569623 cli_runner.go:164] Run: docker container inspect ha-985632-m04 --format={{.State.Status}}
	I0915 07:06:53.360598 2569623 status.go:330] ha-985632-m04 host status = "Running" (err=<nil>)
	I0915 07:06:53.360632 2569623 host.go:66] Checking if "ha-985632-m04" exists ...
	I0915 07:06:53.360973 2569623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-985632-m04
	I0915 07:06:53.379236 2569623 host.go:66] Checking if "ha-985632-m04" exists ...
	I0915 07:06:53.379623 2569623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:06:53.379674 2569623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-985632-m04
	I0915 07:06:53.408065 2569623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35778 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/ha-985632-m04/id_rsa Username:docker}
	I0915 07:06:53.510833 2569623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:06:53.523566 2569623 status.go:257] ha-985632-m04 status: &{Name:ha-985632-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.89s)
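Note: the status trace above shows how minikube decides an apiserver is Running: pgrep locates the newest kube-apiserver process, its /proc/<pid>/cgroup entry yields the freezer cgroup path, and freezer.state distinguishes a paused container ("FROZEN") from a live one ("THAWED") before the final /healthz probe. A minimal Go sketch of that chain, with runCmd as a hypothetical stand-in for minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runCmd executes a command locally and returns trimmed stdout; minikube
// runs the equivalent commands over SSH on the node under test.
func runCmd(args ...string) (string, error) {
	out, err := exec.Command(args[0], args[1:]...).Output()
	return strings.TrimSpace(string(out)), err
}

func apiserverFreezerState() (string, error) {
	// Newest process whose full command line matches the pattern.
	pid, err := runCmd("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	if err != nil {
		return "", fmt.Errorf("apiserver process not found: %w", err)
	}
	// e.g. "8:freezer:/docker/<container-id>/crio/crio-<pod-id>"
	line, err := runCmd("sh", "-c", "sudo egrep '^[0-9]+:freezer:' /proc/"+pid+"/cgroup")
	if err != nil {
		return "", err
	}
	parts := strings.SplitN(line, ":", 3)
	if len(parts) < 3 {
		return "", fmt.Errorf("unexpected cgroup line: %q", line)
	}
	return runCmd("sudo", "cat", "/sys/fs/cgroup/freezer"+parts[2]+"/freezer.state")
}

func main() {
	state, err := apiserverFreezerState()
	if err != nil {
		fmt.Println("apiserver check failed:", err)
		return
	}
	fmt.Println("freezer state:", state) // "THAWED" here means the healthz probe is worth trying
}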

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (35.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 node start m02 -v=7 --alsologtostderr
E0915 07:06:54.574005 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:07:13.752763 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-985632 node start m02 -v=7 --alsologtostderr: (34.360480758s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr: (1.233918991s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (3.467690522s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (311.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-985632 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-985632 -v=7 --alsologtostderr
E0915 07:07:35.535274 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-985632 -v=7 --alsologtostderr: (37.323502417s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-985632 --wait=true -v=7 --alsologtostderr
E0915 07:08:57.457123 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:11:13.594586 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:11:41.298438 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:11:46.046501 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-985632 --wait=true -v=7 --alsologtostderr: (4m33.915671931s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-985632
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (311.39s)
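Note: RestartClusterKeepsNodes asserts that a full stop/start cycle preserves cluster membership: the node list captured before the stop must match the one after the restart. A hedged sketch of that invariant, shelling out to the same binary the test harness uses:

package main

import (
	"fmt"
	"os/exec"
)

func nodeList(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "node", "list", "-p", profile).Output()
	return string(out), err
}

func main() {
	before, err := nodeList("ha-985632")
	if err != nil {
		panic(err)
	}
	// The real test runs `minikube stop` and `minikube start --wait=true`
	// between these two reads (about five minutes in the log above).
	after, err := nodeList("ha-985632")
	if err != nil {
		panic(err)
	}
	if before != after {
		fmt.Printf("node list changed across restart:\nbefore:\n%s\nafter:\n%s", before, after)
	}
}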

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (13.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-985632 node delete m03 -v=7 --alsologtostderr: (12.753183286s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.67s)
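Note: the go-template in the final step walks every node and every status condition, printing the status of each "Ready" condition on its own line; after deleting m03, every remaining node should print "True". The same assertion as a sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
	if err != nil {
		panic(err)
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("node not Ready:", status)
		}
	}
}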

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (25.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-985632 stop -v=7 --alsologtostderr: (25.3678019s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr: exit status 7 (114.162246ms)

                                                
                                                
-- stdout --
	ha-985632
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-985632-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-985632-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:13:24.311283 2584283 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:13:24.311429 2584283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:13:24.311440 2584283 out.go:358] Setting ErrFile to fd 2...
	I0915 07:13:24.311446 2584283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:13:24.311689 2584283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 07:13:24.311876 2584283 out.go:352] Setting JSON to false
	I0915 07:13:24.311904 2584283 mustload.go:65] Loading cluster: ha-985632
	I0915 07:13:24.312345 2584283 config.go:182] Loaded profile config "ha-985632": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:13:24.312368 2584283 status.go:255] checking status of ha-985632 ...
	I0915 07:13:24.312952 2584283 cli_runner.go:164] Run: docker container inspect ha-985632 --format={{.State.Status}}
	I0915 07:13:24.313231 2584283 notify.go:220] Checking for updates...
	I0915 07:13:24.331492 2584283 status.go:330] ha-985632 host status = "Stopped" (err=<nil>)
	I0915 07:13:24.331519 2584283 status.go:343] host is not running, skipping remaining checks
	I0915 07:13:24.331527 2584283 status.go:257] ha-985632 status: &{Name:ha-985632 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:13:24.331559 2584283 status.go:255] checking status of ha-985632-m02 ...
	I0915 07:13:24.331891 2584283 cli_runner.go:164] Run: docker container inspect ha-985632-m02 --format={{.State.Status}}
	I0915 07:13:24.349438 2584283 status.go:330] ha-985632-m02 host status = "Stopped" (err=<nil>)
	I0915 07:13:24.349464 2584283 status.go:343] host is not running, skipping remaining checks
	I0915 07:13:24.349472 2584283 status.go:257] ha-985632-m02 status: &{Name:ha-985632-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:13:24.349502 2584283 status.go:255] checking status of ha-985632-m04 ...
	I0915 07:13:24.349812 2584283 cli_runner.go:164] Run: docker container inspect ha-985632-m04 --format={{.State.Status}}
	I0915 07:13:24.377670 2584283 status.go:330] ha-985632-m04 host status = "Stopped" (err=<nil>)
	I0915 07:13:24.377696 2584283 status.go:343] host is not running, skipping remaining checks
	I0915 07:13:24.377705 2584283 status.go:257] ha-985632-m04 status: &{Name:ha-985632-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (25.48s)
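Note: `minikube status` reports component state through its exit code as well as its text output. The exit status 7 above is consistent with a bitmask in which "host not running" (1), "cluster not running" (2), and "kubernetes not running" (4) are all set; the flag names below are assumptions for illustration, not minikube's exported constants.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Assumed bit layout for the status exit code.
const (
	hostNotRunning    = 1 << 0
	clusterNotRunning = 1 << 1
	k8sNotRunning     = 1 << 2
)

func main() {
	err := exec.Command("out/minikube-linux-arm64", "-p", "ha-985632", "status").Run()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	} else if err != nil {
		panic(err)
	}
	fmt.Printf("exit %d: host stopped=%v cluster stopped=%v kubernetes stopped=%v\n",
		code, code&hostNotRunning != 0, code&clusterNotRunning != 0, code&k8sNotRunning != 0)
}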

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (70.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-985632 --control-plane -v=7 --alsologtostderr
E0915 07:16:13.595144 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-985632 --control-plane -v=7 --alsologtostderr: (1m9.23678432s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-985632 status -v=7 --alsologtostderr: (1.112907991s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

                                                
                                    
x
+
TestJSONOutput/start/Command (78.9s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-108849 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0915 07:18:09.114430 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-108849 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m18.885529046s)
--- PASS: TestJSONOutput/start/Command (78.90s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
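Note: the DistinctCurrentSteps and IncreasingCurrentSteps subtests validate the event stream that `--output=json` produces: each step event carries a data.currentstep field, and the sequence must contain no duplicates and only move forward. A sketch of that check over a captured stream (field names taken from the events printed under TestErrorJSONOutput below):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	prev := -1
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` output here
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // ignore non-JSON lines and non-step events
		}
		cur, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue
		}
		if cur <= prev {
			fmt.Println("currentstep did not increase:", prev, "->", cur)
		}
		prev = cur
	}
}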

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-108849 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.79s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-108849 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.79s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.83s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-108849 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-108849 --output=json --user=testUser: (5.831404494s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-365921 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-365921 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.860682ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c6c3bacb-167c-4e92-8e71-fd7e47517775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-365921] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bcdaf6eb-de06-40a3-9159-41625a6a5264","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"ba3635aa-8f9b-4d38-93c5-fbe622289b1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b7bf29b-34be-47fa-95d5-80e66eccfb42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig"}}
	{"specversion":"1.0","id":"46141451-211f-4338-9330-bff217d3bd87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube"}}
	{"specversion":"1.0","id":"37fe269a-c30a-4810-9fd3-4adf8c2f167f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"64afeedd-4807-4240-bed3-9d24de0d157e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f6ea8b14-b96c-47fa-b44e-a6e72426d126","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-365921" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-365921
--- PASS: TestErrorJSONOutput (0.24s)
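Note: each stdout line above is a CloudEvents-style JSON envelope, and the final io.k8s.sigs.minikube.error event carries the name, message, and exit code behind the non-zero exit. A minimal decoder, with the struct limited to the fields those events actually show:

package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the run above, abbreviated to its populated fields.
	line := `{"specversion":"1.0","id":"f6ea8b14-b96c-47fa-b44e-a6e72426d126","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}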

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.52s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-762490 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-762490 --network=: (38.370168s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-762490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-762490
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-762490: (2.125735332s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.52s)
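Note: with an empty --network= value the test appears to expect a Docker network named after the profile, which it confirms by listing network names. A sketch of that check (the expected name is taken from the profile in the log above and is an assumption about the naming scheme):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	want := "docker-network-762490"
	for _, name := range strings.Fields(string(out)) {
		if name == want {
			fmt.Println("network exists:", name)
			return
		}
	}
	fmt.Println("network not found:", want)
}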

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (35.29s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-314872 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-314872 --network=bridge: (33.239015355s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-314872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-314872
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-314872: (2.009267057s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.29s)

                                                
                                    
x
+
TestKicExistingNetwork (35.35s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-378849 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-378849 --network=existing-network: (33.133292014s)
helpers_test.go:175: Cleaning up "existing-network-378849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-378849
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-378849: (2.019979514s)
--- PASS: TestKicExistingNetwork (35.35s)

                                                
                                    
x
+
TestKicCustomSubnet (34.36s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-785538 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-785538 --subnet=192.168.60.0/24: (32.200553476s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-785538 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-785538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-785538
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-785538: (2.138925487s)
--- PASS: TestKicCustomSubnet (34.36s)
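Note: the subnet assertion inspects the network's IPAM configuration with a Go template that indexes the first config entry. The same check as a sketch, comparing against the subnet requested at start:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-785538",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Println("unexpected subnet:", got)
	}
}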

                                                
                                    
x
+
TestKicStaticIP (37.52s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-765048 --static-ip=192.168.200.200
E0915 07:21:13.595142 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-765048 --static-ip=192.168.200.200: (35.208411827s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-765048 ip
helpers_test.go:175: Cleaning up "static-ip-765048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-765048
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-765048: (2.14015111s)
--- PASS: TestKicStaticIP (37.52s)
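Note: TestKicStaticIP starts the cluster with a fixed address and reads it back with `minikube ip`; the two must match. A sketch of the comparison:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-765048", "ip").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.200.200" {
		fmt.Println("static IP not honored, got:", got)
	}
}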

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (68.18s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-739670 --driver=docker  --container-runtime=crio
E0915 07:21:46.045976 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-739670 --driver=docker  --container-runtime=crio: (30.374646784s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-742427 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-742427 --driver=docker  --container-runtime=crio: (32.114183757s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-739670
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-742427
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-742427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-742427
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-742427: (2.040126602s)
helpers_test.go:175: Cleaning up "first-739670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-739670
E0915 07:22:36.659856 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-739670: (2.313662569s)
--- PASS: TestMinikubeProfile (68.18s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.96s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-771180 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-771180 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.954879894s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.96s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-771180 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
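Note: the mount flags above configure minikube's 9p host mount (--mount-msize is the 9p payload size, --mount-port the server port, --mount-uid/--mount-gid the ownership inside the node), and verification is simply listing the mount point over SSH. A sketch of that probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// If the 9p mount is up, listing the mount point succeeds.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-771180",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		fmt.Println("mount not visible:", err)
		return
	}
	fmt.Printf("mounted host directory contents:\n%s", out)
}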

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-773149 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-773149 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.890212069s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.89s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-773149 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-771180 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-771180 --alsologtostderr -v=5: (1.642484242s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-773149 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-773149
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-773149: (1.208531645s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.09s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-773149
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-773149: (7.090415167s)
--- PASS: TestMountStart/serial/RestartStopped (8.09s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-773149 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (133.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-721558 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-721558 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m13.386622524s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.89s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-721558 -- rollout status deployment/busybox: (4.592657859s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- exec busybox-7dff88458-26l8f -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- exec busybox-7dff88458-tkjqz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- exec busybox-7dff88458-26l8f -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- exec busybox-7dff88458-tkjqz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- exec busybox-7dff88458-26l8f -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- exec busybox-7dff88458-tkjqz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.62s)
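Note: DeployApp2Nodes schedules one busybox pod per node and resolves the same three names from each, exercising DNS on both nodes: an external name, the short in-cluster service name, and its fully qualified form. A sketch of that loop (pod names would normally come from the jsonpath query above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-26l8f", "busybox-7dff88458-tkjqz"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			err := exec.Command("kubectl", "--context", "multinode-721558",
				"exec", pod, "--", "nslookup", name).Run()
			if err != nil {
				fmt.Printf("%s failed to resolve %s: %v\n", pod, name, err)
			}
		}
	}
}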

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- exec busybox-7dff88458-26l8f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- exec busybox-7dff88458-26l8f -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- exec busybox-7dff88458-tkjqz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721558 -- exec busybox-7dff88458-tkjqz -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
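Note: the shell pipeline above derives the host's address from inside a pod: `nslookup host.minikube.internal` prints the answer on its fifth line, `awk 'NR==5'` selects that line, and `cut -d' ' -f3` takes the third space-separated field, the IP itself (192.168.67.1 here); the pod then pings it once. The same two steps driven through kubectl exec:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	lookup := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "multinode-721558",
		"exec", "busybox-7dff88458-26l8f", "--", "sh", "-c", lookup).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host IP as seen from the pod:", hostIP)
	ping := fmt.Sprintf("ping -c 1 %s", hostIP)
	if err := exec.Command("kubectl", "--context", "multinode-721558",
		"exec", "busybox-7dff88458-26l8f", "--", "sh", "-c", ping).Run(); err != nil {
		fmt.Println("host unreachable from pod:", err)
	}
}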

                                                
                                    
x
+
TestMultiNode/serial/AddNode (27.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-721558 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-721558 -v 3 --alsologtostderr: (26.354336312s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.02s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-721558 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp testdata/cp-test.txt multinode-721558:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp multinode-721558:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1276638901/001/cp-test_multinode-721558.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp multinode-721558:/home/docker/cp-test.txt multinode-721558-m02:/home/docker/cp-test_multinode-721558_multinode-721558-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m02 "sudo cat /home/docker/cp-test_multinode-721558_multinode-721558-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp multinode-721558:/home/docker/cp-test.txt multinode-721558-m03:/home/docker/cp-test_multinode-721558_multinode-721558-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m03 "sudo cat /home/docker/cp-test_multinode-721558_multinode-721558-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp testdata/cp-test.txt multinode-721558-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp multinode-721558-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1276638901/001/cp-test_multinode-721558-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp multinode-721558-m02:/home/docker/cp-test.txt multinode-721558:/home/docker/cp-test_multinode-721558-m02_multinode-721558.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558 "sudo cat /home/docker/cp-test_multinode-721558-m02_multinode-721558.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp multinode-721558-m02:/home/docker/cp-test.txt multinode-721558-m03:/home/docker/cp-test_multinode-721558-m02_multinode-721558-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m03 "sudo cat /home/docker/cp-test_multinode-721558-m02_multinode-721558-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp testdata/cp-test.txt multinode-721558-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp multinode-721558-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1276638901/001/cp-test_multinode-721558-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp multinode-721558-m03:/home/docker/cp-test.txt multinode-721558:/home/docker/cp-test_multinode-721558-m03_multinode-721558.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558 "sudo cat /home/docker/cp-test_multinode-721558-m03_multinode-721558.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 cp multinode-721558-m03:/home/docker/cp-test.txt multinode-721558-m02:/home/docker/cp-test_multinode-721558-m03_multinode-721558-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 ssh -n multinode-721558-m02 "sudo cat /home/docker/cp-test_multinode-721558-m03_multinode-721558-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.40s)
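Note: CopyFile pushes a test file to each node with `minikube cp`, copies it node-to-node, and confirms every hop by reading the file back with `ssh -n <node> "sudo cat ..."`. One hop of that round-trip as a sketch:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Push the file to the primary node, then read it back over SSH.
	if err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-721558",
		"cp", "testdata/cp-test.txt", "multinode-721558:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	got, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-721558",
		"ssh", "-n", "multinode-721558", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Println("copied file differs from the original")
	}
}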

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-721558 node stop m03: (1.221823161s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-721558 status: exit status 7 (534.47903ms)

                                                
                                                
-- stdout --
	multinode-721558
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-721558-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-721558-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-721558 status --alsologtostderr: exit status 7 (516.031209ms)

                                                
                                                
-- stdout --
	multinode-721558
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-721558-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-721558-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:26:06.473155 2638687 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:26:06.473456 2638687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:26:06.473495 2638687 out.go:358] Setting ErrFile to fd 2...
	I0915 07:26:06.473529 2638687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:26:06.473889 2638687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 07:26:06.474247 2638687 out.go:352] Setting JSON to false
	I0915 07:26:06.474334 2638687 mustload.go:65] Loading cluster: multinode-721558
	I0915 07:26:06.474404 2638687 notify.go:220] Checking for updates...
	I0915 07:26:06.476639 2638687 config.go:182] Loaded profile config "multinode-721558": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:26:06.476704 2638687 status.go:255] checking status of multinode-721558 ...
	I0915 07:26:06.479882 2638687 cli_runner.go:164] Run: docker container inspect multinode-721558 --format={{.State.Status}}
	I0915 07:26:06.498588 2638687 status.go:330] multinode-721558 host status = "Running" (err=<nil>)
	I0915 07:26:06.498620 2638687 host.go:66] Checking if "multinode-721558" exists ...
	I0915 07:26:06.498929 2638687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-721558
	I0915 07:26:06.517820 2638687 host.go:66] Checking if "multinode-721558" exists ...
	I0915 07:26:06.518157 2638687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:26:06.518212 2638687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-721558
	I0915 07:26:06.539829 2638687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35883 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/multinode-721558/id_rsa Username:docker}
	I0915 07:26:06.634210 2638687 ssh_runner.go:195] Run: systemctl --version
	I0915 07:26:06.638455 2638687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:26:06.650865 2638687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:26:06.707243 2638687 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-15 07:26:06.697607885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 07:26:06.707852 2638687 kubeconfig.go:125] found "multinode-721558" server: "https://192.168.67.2:8443"
	I0915 07:26:06.707889 2638687 api_server.go:166] Checking apiserver status ...
	I0915 07:26:06.707945 2638687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:26:06.719974 2638687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1356/cgroup
	I0915 07:26:06.729656 2638687 api_server.go:182] apiserver freezer: "8:freezer:/docker/7cc314fc192c866f195dbdf3061dca8b44ee49fb9ad2aaafa602cf0613df2e18/crio/crio-4e4b4436a455050ebd533b3775fdda71738e3815e61bc5b1976c7324fec10020"
	I0915 07:26:06.729729 2638687 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7cc314fc192c866f195dbdf3061dca8b44ee49fb9ad2aaafa602cf0613df2e18/crio/crio-4e4b4436a455050ebd533b3775fdda71738e3815e61bc5b1976c7324fec10020/freezer.state
	I0915 07:26:06.739697 2638687 api_server.go:204] freezer state: "THAWED"
	I0915 07:26:06.739726 2638687 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0915 07:26:06.747383 2638687 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0915 07:26:06.747414 2638687 status.go:422] multinode-721558 apiserver status = Running (err=<nil>)
	I0915 07:26:06.747426 2638687 status.go:257] multinode-721558 status: &{Name:multinode-721558 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:26:06.747443 2638687 status.go:255] checking status of multinode-721558-m02 ...
	I0915 07:26:06.747750 2638687 cli_runner.go:164] Run: docker container inspect multinode-721558-m02 --format={{.State.Status}}
	I0915 07:26:06.766678 2638687 status.go:330] multinode-721558-m02 host status = "Running" (err=<nil>)
	I0915 07:26:06.766707 2638687 host.go:66] Checking if "multinode-721558-m02" exists ...
	I0915 07:26:06.767033 2638687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-721558-m02
	I0915 07:26:06.784896 2638687 host.go:66] Checking if "multinode-721558-m02" exists ...
	I0915 07:26:06.785284 2638687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:26:06.785335 2638687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-721558-m02
	I0915 07:26:06.802003 2638687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35888 SSHKeyPath:/home/jenkins/minikube-integration/19644-2517725/.minikube/machines/multinode-721558-m02/id_rsa Username:docker}
	I0915 07:26:06.894077 2638687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:26:06.907185 2638687 status.go:257] multinode-721558-m02 status: &{Name:multinode-721558-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:26:06.907222 2638687 status.go:255] checking status of multinode-721558-m03 ...
	I0915 07:26:06.907587 2638687 cli_runner.go:164] Run: docker container inspect multinode-721558-m03 --format={{.State.Status}}
	I0915 07:26:06.927879 2638687 status.go:330] multinode-721558-m03 host status = "Stopped" (err=<nil>)
	I0915 07:26:06.927901 2638687 status.go:343] host is not running, skipping remaining checks
	I0915 07:26:06.927922 2638687 status.go:257] multinode-721558-m03 status: &{Name:multinode-721558-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
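
The status flow in the trace above resolves the apiserver endpoint from the kubeconfig, confirms the freezer cgroup is THAWED, and finally probes /healthz, treating a 200 "ok" as Running. Below is a minimal sketch of that last step, assuming the endpoint logged above and substituting InsecureSkipVerify for the cluster CA bundle minikube itself would load; it is an illustration, not the status.go implementation.

	// healthz_probe.go - sketch of the apiserver health probe seen in the
	// status trace above. InsecureSkipVerify stands in for loading the
	// cluster CA bundle.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the literal body "ok".
		fmt.Printf("%d %s\n", resp.StatusCode, body)
	}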

TestMultiNode/serial/StartAfterStop (10.58s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 node start m03 -v=7 --alsologtostderr
E0915 07:26:13.594583 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-721558 node start m03 -v=7 --alsologtostderr: (9.792460271s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.58s)

TestMultiNode/serial/RestartKeepsNodes (103.01s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-721558
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-721558
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-721558: (24.960744956s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-721558 --wait=true -v=8 --alsologtostderr
E0915 07:26:46.046075 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-721558 --wait=true -v=8 --alsologtostderr: (1m17.837021135s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-721558
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.01s)

TestMultiNode/serial/DeleteNode (5.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-721558 node delete m03: (4.945156038s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)
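
The go-template passed to kubectl above walks every node's status.conditions and prints the status of each node's Ready condition, one value per node. A hedged Go sketch of the same check via os/exec follows; the command line is the one from the run above, and the all-"True" assertion is an assumption about what the test intends.

	// ready_check.go - sketch: run the readiness template used above and
	// flag any node whose Ready condition is not "True".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		for _, field := range strings.Fields(string(out)) {
			if strings.Trim(field, "'") != "True" {
				fmt.Println("node not ready:", field)
			}
		}
	}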

TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-721558 stop: (23.754159281s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-721558 status: exit status 7 (115.134753ms)

-- stdout --
	multinode-721558
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-721558-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-721558 status --alsologtostderr: exit status 7 (149.043367ms)

-- stdout --
	multinode-721558
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-721558-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 07:28:30.104912 2646521 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:28:30.105065 2646521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:28:30.105074 2646521 out.go:358] Setting ErrFile to fd 2...
	I0915 07:28:30.105080 2646521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:28:30.105357 2646521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 07:28:30.105604 2646521 out.go:352] Setting JSON to false
	I0915 07:28:30.105658 2646521 mustload.go:65] Loading cluster: multinode-721558
	I0915 07:28:30.105720 2646521 notify.go:220] Checking for updates...
	I0915 07:28:30.106135 2646521 config.go:182] Loaded profile config "multinode-721558": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:28:30.106149 2646521 status.go:255] checking status of multinode-721558 ...
	I0915 07:28:30.106764 2646521 cli_runner.go:164] Run: docker container inspect multinode-721558 --format={{.State.Status}}
	I0915 07:28:30.170824 2646521 status.go:330] multinode-721558 host status = "Stopped" (err=<nil>)
	I0915 07:28:30.170847 2646521 status.go:343] host is not running, skipping remaining checks
	I0915 07:28:30.170856 2646521 status.go:257] multinode-721558 status: &{Name:multinode-721558 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:28:30.170894 2646521 status.go:255] checking status of multinode-721558-m02 ...
	I0915 07:28:30.171247 2646521 cli_runner.go:164] Run: docker container inspect multinode-721558-m02 --format={{.State.Status}}
	I0915 07:28:30.190439 2646521 status.go:330] multinode-721558-m02 host status = "Stopped" (err=<nil>)
	I0915 07:28:30.190467 2646521 status.go:343] host is not running, skipping remaining checks
	I0915 07:28:30.190475 2646521 status.go:257] multinode-721558-m02 status: &{Name:multinode-721558-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)
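
Both status runs above exit 7 rather than 0 because every host is stopped; the exit code, not just the stdout, encodes cluster state. A small sketch of reading that code from Go, assuming the profile name from this run:

	// status_code.go - sketch: read the exit code of `minikube status`
	// (0 in the passing runs above, 7 when a host is stopped).
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "-p", "multinode-721558", "status").Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all components running")
		case errors.As(err, &exitErr):
			fmt.Println("status exit code:", exitErr.ExitCode())
		default:
			fmt.Println("could not run minikube:", err)
		}
	}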

TestMultiNode/serial/RestartMultiNode (50.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-721558 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-721558 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (50.159668211s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721558 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.81s)

TestMultiNode/serial/ValidateNameConflict (35.27s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-721558
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-721558-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-721558-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.411565ms)

-- stdout --
	* [multinode-721558-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-721558-m02' is duplicated with machine name 'multinode-721558-m02' in profile 'multinode-721558'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-721558-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-721558-m03 --driver=docker  --container-runtime=crio: (32.800530365s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-721558
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-721558: exit status 80 (350.500414ms)

-- stdout --
	* Adding node m03 to cluster multinode-721558 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-721558-m03 already exists in multinode-721558-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-721558-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-721558-m03: (1.97442896s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.27s)

TestPreload (131.34s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-427852 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0915 07:31:13.594686 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-427852 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m38.786974054s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-427852 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-427852 image pull gcr.io/k8s-minikube/busybox: (3.143776234s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-427852
E0915 07:31:46.045983 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-427852: (5.757112507s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-427852 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-427852 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.985462326s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-427852 image list
helpers_test.go:175: Cleaning up "test-preload-427852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-427852
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-427852: (2.351424222s)
--- PASS: TestPreload (131.34s)
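
The command sequence above is the substance of the test: build a v1.24.4 cluster without a preload tarball, pull busybox into it, stop, restart on the default Kubernetes version, and confirm the image survived. A condensed, hedged sketch of the same sequence (profile name and flags copied from the run above):

	// preload_flow.go - sketch of the TestPreload sequence above: an image
	// pulled before the stop must still show up in `image list` afterwards.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func run(args ...string) string {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("minikube %v: %v", args, err))
		}
		return string(out)
	}

	func main() {
		p := "test-preload-427852"
		run("start", "-p", p, "--memory=2200", "--preload=false",
			"--driver=docker", "--container-runtime=crio", "--kubernetes-version=v1.24.4")
		run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
		run("stop", "-p", p)
		run("start", "-p", p, "--memory=2200", "--driver=docker", "--container-runtime=crio")
		if !strings.Contains(run("-p", p, "image", "list"), "busybox") {
			fmt.Println("image lost across restart")
		}
	}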

TestScheduledStopUnix (106.29s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-842863 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-842863 --memory=2048 --driver=docker  --container-runtime=crio: (30.485768966s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-842863 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-842863 -n scheduled-stop-842863
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-842863 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-842863 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-842863 -n scheduled-stop-842863
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-842863
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-842863 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-842863
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-842863: exit status 7 (67.236261ms)

-- stdout --
	scheduled-stop-842863
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-842863 -n scheduled-stop-842863
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-842863 -n scheduled-stop-842863: exit status 7 (70.47372ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-842863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-842863
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-842863: (4.302061038s)
--- PASS: TestScheduledStopUnix (106.29s)
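
The flags exercised above cover the whole scheduled-stop surface: --schedule arms a timer, a second --schedule replaces it, --cancel-scheduled disarms it, and status --format={{.TimeToStop}} exposes the pending deadline. A minimal sketch of arming and cancelling, using the profile name from this run:

	// scheduled_stop.go - sketch: arm a scheduled stop, inspect the
	// deadline, then cancel it, with the flags used in the test above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		p := "scheduled-stop-842863"
		if err := exec.Command("minikube", "stop", "-p", p, "--schedule", "5m").Run(); err != nil {
			fmt.Println("schedule failed:", err)
			return
		}
		// {{.TimeToStop}} reports the remaining time on the armed timer.
		out, _ := exec.Command("minikube", "status", "--format={{.TimeToStop}}", "-p", p).CombinedOutput()
		fmt.Println("time to stop:", string(out))
		if err := exec.Command("minikube", "stop", "-p", p, "--cancel-scheduled").Run(); err != nil {
			fmt.Println("cancel failed:", err)
		}
	}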

TestInsufficientStorage (10.95s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-357865 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-357865 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.417516211s)

-- stdout --
	{"specversion":"1.0","id":"51e1749b-974f-42be-bb65-e064dd6aa25b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-357865] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"810e0145-2430-44f1-b8e8-daecad854a8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"8b566cdb-dae7-4e90-9a45-4334128e2678","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4b695aea-b79e-4781-9f4a-715bae3394cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig"}}
	{"specversion":"1.0","id":"d145194f-1ed4-47f9-9ef0-d66766b5680c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube"}}
	{"specversion":"1.0","id":"05983ca6-a54e-4886-8c80-07d04eab4dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"54e6de9a-0b3e-49a9-bee5-2926b4c1c5c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"91e9d693-6104-4649-90ea-5c4087099f2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e1fca3cb-c9dd-48d5-8b09-fa1f7adf3c92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f595961b-0f98-46bd-886e-00faacfa4c88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"275b7f64-beb9-4b63-b368-dde519cc9ffe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"342d0e18-1356-4ebd-a1d1-2bd871c80541","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-357865\" primary control-plane node in \"insufficient-storage-357865\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e22ff941-3ef9-4827-9236-64c8c30e2533","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726358845-19644 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"502dc99b-305a-4562-940d-a6392cef18f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6cf282e-719f-4d94-ba92-f36dd5419f7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-357865 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-357865 --output=json --layout=cluster: exit status 7 (295.002844ms)

-- stdout --
	{"Name":"insufficient-storage-357865","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-357865","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 07:34:06.871095 2664005 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-357865" does not appear in /home/jenkins/minikube-integration/19644-2517725/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-357865 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-357865 --output=json --layout=cluster: exit status 7 (290.767443ms)

-- stdout --
	{"Name":"insufficient-storage-357865","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-357865","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 07:34:07.166677 2664067 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-357865" does not appear in /home/jenkins/minikube-integration/19644-2517725/kubeconfig
	E0915 07:34:07.177490 2664067 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/insufficient-storage-357865/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-357865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-357865
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-357865: (1.945256387s)
--- PASS: TestInsufficientStorage (10.95s)
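
With --output=json, minikube writes one CloudEvents-style JSON object per line, and the storage failure above arrives as an io.k8s.sigs.minikube.error event carrying exitcode "26" (RSRC_DOCKER_STORAGE). A sketch of scanning such a stream for error events, under the schema visible in the captured output (all data fields there are strings):

	// events_scan.go - sketch: scan minikube's --output=json stream (one
	// JSON object per line, as captured above) for error events.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)         // pipe `minikube start --output=json` in here
		sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some event lines are long
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exitcode %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}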

TestRunningBinaryUpgrade (69.57s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2631666146 start -p running-upgrade-084373 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2631666146 start -p running-upgrade-084373 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.054412723s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-084373 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-084373 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.476186086s)
helpers_test.go:175: Cleaning up "running-upgrade-084373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-084373
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-084373: (3.420313258s)
--- PASS: TestRunningBinaryUpgrade (69.57s)

TestKubernetesUpgrade (385.99s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-385428 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0915 07:41:13.595123 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-385428 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m11.061409352s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-385428
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-385428: (1.339154235s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-385428 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-385428 status --format={{.Host}}: exit status 7 (90.858438ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-385428 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0915 07:41:46.045507 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-385428 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.576014519s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-385428 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-385428 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-385428 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (118.295988ms)

-- stdout --
	* [kubernetes-upgrade-385428] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-385428
	    minikube start -p kubernetes-upgrade-385428 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3854282 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-385428 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-385428 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0915 07:46:46.045855 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-385428 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.145926941s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-385428" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-385428
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-385428: (2.548382645s)
--- PASS: TestKubernetesUpgrade (385.99s)
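
The downgrade attempt above fails fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and leaves the cluster untouched, so the refusal itself is cheap to probe for. A hedged sketch that retries the same command and checks for that code, with the profile and versions from this run:

	// downgrade_probe.go - sketch: repeat the unsupported downgrade seen
	// above and confirm minikube refuses it with exit status 106.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-385428",
			"--memory=2200", "--kubernetes-version=v1.20.0",
			"--driver=docker", "--container-runtime=crio")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
			fmt.Println("downgrade rejected, as expected")
			return
		}
		fmt.Println("unexpected result:", err)
	}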

TestMissingContainerUpgrade (120.28s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1291680427 start -p missing-upgrade-276615 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1291680427 start -p missing-upgrade-276615 --memory=2200 --driver=docker  --container-runtime=crio: (48.207470044s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-276615
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-276615: (10.559109145s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-276615
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-276615 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-276615 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.642224306s)
helpers_test.go:175: Cleaning up "missing-upgrade-276615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-276615
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-276615: (2.172608698s)
--- PASS: TestMissingContainerUpgrade (120.28s)

TestPause/serial/Start (92.65s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-546742 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-546742 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m32.647474506s)
--- PASS: TestPause/serial/Start (92.65s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-779319 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-779319 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (103.094504ms)

-- stdout --
	* [NoKubernetes-779319] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (44.84s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-779319 --driver=docker  --container-runtime=crio
E0915 07:34:49.117670 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-779319 --driver=docker  --container-runtime=crio: (44.34753885s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-779319 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.84s)

TestNoKubernetes/serial/StartWithStopK8s (7.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-779319 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-779319 --no-kubernetes --driver=docker  --container-runtime=crio: (4.997845002s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-779319 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-779319 status -o json: exit status 2 (314.212033ms)

-- stdout --
	{"Name":"NoKubernetes-779319","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-779319
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-779319: (1.983223222s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.30s)

TestNoKubernetes/serial/Start (6.44s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-779319 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-779319 --no-kubernetes --driver=docker  --container-runtime=crio: (6.435199947s)
--- PASS: TestNoKubernetes/serial/Start (6.44s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-779319 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-779319 "sudo systemctl is-active --quiet service kubelet": exit status 1 (291.091932ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
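
The check above leans on systemd semantics: `systemctl is-active --quiet` exits non-zero when the unit is not active, and that code travels back through `minikube ssh` as exit status 1. A small sketch of the same assertion from Go, profile name as in this run:

	// kubelet_check.go - sketch: assert kubelet is NOT running in the node,
	// mirroring the `minikube ssh ... systemctl is-active` probe above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-779319",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet inactive, as expected:", err) // non-zero exit = unit not active
			return
		}
		fmt.Println("unexpected: kubelet is active")
	}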

TestNoKubernetes/serial/ProfileList (0.98s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-779319
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-779319: (1.214737553s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (7.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-779319 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-779319 --driver=docker  --container-runtime=crio: (7.370703067s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.37s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-779319 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-779319 "sudo systemctl is-active --quiet service kubelet": exit status 1 (307.434327ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestNetworkPlugins/group/false (3.85s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-316191 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-316191 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (195.979535ms)

-- stdout --
	* [false-316191] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0915 07:35:23.333906 2674091 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:35:23.334203 2674091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:35:23.334236 2674091 out.go:358] Setting ErrFile to fd 2...
	I0915 07:35:23.334256 2674091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:35:23.334556 2674091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-2517725/.minikube/bin
	I0915 07:35:23.335046 2674091 out.go:352] Setting JSON to false
	I0915 07:35:23.336052 2674091 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":55074,"bootTime":1726330649,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0915 07:35:23.336163 2674091 start.go:139] virtualization:  
	I0915 07:35:23.339401 2674091 out.go:177] * [false-316191] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 07:35:23.342975 2674091 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:35:23.343062 2674091 notify.go:220] Checking for updates...
	I0915 07:35:23.349438 2674091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:35:23.352222 2674091 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-2517725/kubeconfig
	I0915 07:35:23.354815 2674091 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-2517725/.minikube
	I0915 07:35:23.357547 2674091 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 07:35:23.360426 2674091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:35:23.363531 2674091 config.go:182] Loaded profile config "pause-546742": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:35:23.363658 2674091 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:35:23.396997 2674091 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 07:35:23.397109 2674091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:35:23.456153 2674091 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-15 07:35:23.446092901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 07:35:23.456263 2674091 docker.go:318] overlay module found
	I0915 07:35:23.460785 2674091 out.go:177] * Using the docker driver based on user configuration
	I0915 07:35:23.463470 2674091 start.go:297] selected driver: docker
	I0915 07:35:23.463488 2674091 start.go:901] validating driver "docker" against <nil>
	I0915 07:35:23.463502 2674091 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:35:23.466635 2674091 out.go:201] 
	W0915 07:35:23.469188 2674091 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0915 07:35:23.471953 2674091 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-316191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-316191

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-316191

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-316191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-316191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-316191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-316191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-316191

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-316191

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-316191

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-316191

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-316191

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-316191" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt
extensions:
- extension:
last-update: Sun, 15 Sep 2024 07:34:56 UTC
provider: minikube.sigs.k8s.io
version: v1.34.0
name: cluster_info
server: https://192.168.76.2:8443
name: pause-546742
contexts:
- context:
cluster: pause-546742
extensions:
- extension:
last-update: Sun, 15 Sep 2024 07:34:56 UTC
provider: minikube.sigs.k8s.io
version: v1.34.0
name: context_info
namespace: default
user: pause-546742
name: pause-546742
current-context: pause-546742
kind: Config
preferences: {}
users:
- name: pause-546742
user:
client-certificate: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/pause-546742/client.crt
client-key: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/pause-546742/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-316191

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-316191"

                                                
                                                
----------------------- debugLogs end: false-316191 [took: 3.480265484s] --------------------------------
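Every probe above fails with "context was not found" or "Profile ... not found" for the same reason: the false-316191 start exited at the MK_USAGE check before a cluster, profile, or kubeconfig entry was ever created, which is why the kubectl config dump shows only the unrelated pause-546742 context. One way to confirm which contexts actually exist (illustrative command, not part of the test run):

	kubectl config get-contexts -o name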
helpers_test.go:175: Cleaning up "false-316191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-316191
--- PASS: TestNetworkPlugins/group/false (3.85s)

TestPause/serial/SecondStartNoReconfiguration (29.58s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-546742 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-546742 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.549066589s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.58s)

TestPause/serial/Pause (0.92s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-546742 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.92s)

TestPause/serial/VerifyStatus (0.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-546742 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-546742 --output=json --layout=cluster: exit status 2 (375.96092ms)

-- stdout --
	{"Name":"pause-546742","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-546742","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
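The status JSON above maps component state to HTTP-style codes (200 OK, 405 Stopped, 418 Paused), and the non-zero exit (status 2) reflects the paused apiserver and stopped kubelet. With jq installed, the per-component state can be extracted from the same output; a sketch (jq is an assumption, the test itself does not use it):

	out/minikube-linux-arm64 status -p pause-546742 --output=json --layout=cluster | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'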

TestPause/serial/Unpause (0.87s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-546742 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)

TestPause/serial/PauseAgain (1.36s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-546742 --alsologtostderr -v=5
E0915 07:36:13.594884 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-546742 --alsologtostderr -v=5: (1.361135297s)
--- PASS: TestPause/serial/PauseAgain (1.36s)

TestPause/serial/DeletePaused (3.33s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-546742 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-546742 --alsologtostderr -v=5: (3.326994428s)
--- PASS: TestPause/serial/DeletePaused (3.33s)

TestPause/serial/VerifyDeletedResources (0.48s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-546742
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-546742: exit status 1 (29.401539ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-546742: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)
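The verification leans on docker CLI exit codes: once the profile is deleted, docker volume inspect exits non-zero with "no such volume", and the accompanying docker ps -a / docker network ls runs confirm no container or network named after the profile survives. A minimal shell sketch of the same check (profile name reused for illustration):

	docker volume inspect pause-546742 >/dev/null 2>&1 || echo "volume pause-546742 removed"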

TestStoppedBinaryUpgrade/Setup (0.65s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.65s)

TestStoppedBinaryUpgrade/Upgrade (111.69s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3167270774 start -p stopped-upgrade-400377 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3167270774 start -p stopped-upgrade-400377 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m17.825961268s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3167270774 -p stopped-upgrade-400377 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3167270774 -p stopped-upgrade-400377 stop: (2.56494334s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-400377 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0915 07:39:16.661970 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-400377 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.294499639s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.69s)
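The upgrade path drives one profile with two binaries: create the cluster with an old minikube release, stop it, then start the same profile with the freshly built binary, which must adopt the existing cluster state. Condensed from the commands above:

	/tmp/minikube-v1.26.0.3167270774 start -p stopped-upgrade-400377 --memory=2200 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.26.0.3167270774 -p stopped-upgrade-400377 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-400377 --memory=2200 --driver=docker --container-runtime=crio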

TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-400377
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-400377: (1.063348962s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

TestNetworkPlugins/group/auto/Start (81.08s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m21.076134361s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.08s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-316191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-316191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2fd8p" [d96536d8-3997-4dcc-be43-f281b714614a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2fd8p" [d96536d8-3997-4dcc-be43-f281b714614a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003588622s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.45s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-316191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
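Localhost and HairPin exercise two different paths with the same pod: Localhost connects to 127.0.0.1:8080 inside the netcat pod, while HairPin connects to the service name "netcat", sending traffic out through the service VIP and back to the originating pod (hairpin NAT). Because nc -z exits 0 only on a successful connect, kubectl exec propagates pass/fail directly; a sketch of the hairpin probe outside the harness (context name reused for illustration):

	kubectl --context auto-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin ok"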

TestNetworkPlugins/group/kindnet/Start (48.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (48.885626096s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.89s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-28jhg" [f1731125-bdf4-420b-bd00-c98ad54dbceb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003639014s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-316191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-316191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4bmzb" [ba94f5cf-e0f2-43f6-b688-2a1a1da7ef37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4bmzb" [ba94f5cf-e0f2-43f6-b688-2a1a1da7ef37] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004941438s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.31s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-316191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (68.62s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0915 07:46:13.595252 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.623239447s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.62s)

TestNetworkPlugins/group/custom-flannel/Start (55.72s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.722092406s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.72s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vhc7k" [d1fa3fa3-d6f6-4a21-8104-38103104d0ac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005864217s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-316191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (13.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-316191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ngblh" [fb868506-ca98-4d22-93c3-89e4a272943f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ngblh" [fb868506-ca98-4d22-93c3-89e4a272943f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004467511s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.36s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-316191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-316191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-316191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w7l7w" [753be966-752b-496b-882c-2addb2ec44f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-w7l7w" [753be966-752b-496b-882c-2addb2ec44f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.007803821s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.42s)

TestNetworkPlugins/group/enable-default-cni/Start (44.71s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (44.707709114s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.71s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-316191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (78.68s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m18.677650878s)
--- PASS: TestNetworkPlugins/group/flannel/Start (78.68s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-316191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (29.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-316191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4cngx" [862a8e4e-d602-4cba-99dd-cc34cc99ccf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 07:48:55.452146 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:55.458531 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:55.470031 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:55.491506 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:55.533028 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:55.614553 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:55.776163 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:56.098210 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:56.740575 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:48:58.022573 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-4cngx" [862a8e4e-d602-4cba-99dd-cc34cc99ccf3] Running
E0915 07:49:00.584376 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 29.004841077s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (29.37s)

TestNetworkPlugins/group/enable-default-cni/DNS (5.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-316191 exec deployment/netcat -- nslookup kubernetes.default
E0915 07:49:05.706686 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-316191 exec deployment/netcat -- nslookup kubernetes.default: (5.314526723s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (5.31s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (80.9s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0915 07:49:36.429542 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-316191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m20.899168561s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kcbxl" [dca596c4-fcdc-4747-9fdc-b349c2ffd5ce] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0049747s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-316191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-316191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wsw7d" [c5b61902-9d52-418f-a578-fe9fbbe55c49] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wsw7d" [c5b61902-9d52-418f-a578-fe9fbbe55c49] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004263107s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-316191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (193.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-338960 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0915 07:50:38.646561 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/kindnet-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-338960 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m13.906548625s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (193.91s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-316191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-316191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d9sjn" [47122a5b-1817-4d22-8825-875f840fb429] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 07:50:59.128422 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/kindnet-316191/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-d9sjn" [47122a5b-1817-4d22-8825-875f840fb429] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00504992s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-316191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-316191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E0915 08:04:48.232320 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:05:18.152507 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/kindnet-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:05:18.516709 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:05:23.978061 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-498410 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0915 07:51:39.313264 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:51:40.089965 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/kindnet-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:51:46.045896 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:52:06.194117 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/calico-316191/client.crt: no such file or directory" logger="UnhandledError"
	(last message repeated 12 times with increasing backoff, through 07:52:26.690475)
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-498410 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m9.938789229s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.94s)
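
With --preload=false minikube skips the preloaded image tarball, so CRI-O pulls every component image at start time; the cluster still came up in about 70 seconds here. A sketch for inspecting what landed in the runtime afterwards, assuming the same profile:

	out/minikube-linux-arm64 -p no-preload-498410 image list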

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-498410 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9969ca7b-acc9-4d60-878d-8f85813721f2] Pending
helpers_test.go:344: "busybox" [9969ca7b-acc9-4d60-878d-8f85813721f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9969ca7b-acc9-4d60-878d-8f85813721f2] Running
E0915 07:52:47.172973 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/calico-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:52:50.016215 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:52:50.024891 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:52:50.036464 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:52:50.057890 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:52:50.099474 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003766404s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-498410 exec busybox -- /bin/sh -c "ulimit -n"
E0915 07:52:50.181126 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.42s)
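
The closing exec of ulimit -n checks the open-file limit a container actually receives under CRI-O. As a sketch, the same probe can be compared against the node itself (the expected values are runtime- and distro-dependent, so treat any particular number as an assumption):

	kubectl --context no-preload-498410 exec busybox -- /bin/sh -c "ulimit -n"
	out/minikube-linux-arm64 ssh -p no-preload-498410 "ulimit -n"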

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-498410 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0915 07:52:50.343183 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:52:50.664935 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:52:51.306816 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-498410 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.046325761s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-498410 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.93s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-498410 --alsologtostderr -v=3
E0915 07:52:52.588308 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:52:55.150049 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:53:00.271494 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:53:02.012480 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/kindnet-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-498410 --alsologtostderr -v=3: (11.929453087s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-498410 -n no-preload-498410
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-498410 -n no-preload-498410: exit status 7 (74.797678ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-498410 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
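
Exit status 7 from status on a stopped profile is expected, hence the "may be ok" note before the addon is enabled offline. The same tolerate-then-enable sequence in isolation (a sketch using the profile above):

	out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-498410 || true
	out/minikube-linux-arm64 addons enable dashboard -p no-preload-498410 --images=MetricsScraper=registry.k8s.io/echoserver:1.4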

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (267.56s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-498410 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0915 07:53:10.514021 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:53:28.135030 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/calico-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:53:30.996026 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:53:35.155821 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/enable-default-cni-316191/client.crt: no such file or directory" logger="UnhandledError"
	(last message repeated 10 times with increasing backoff, through 07:53:40.287815)
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-498410 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m27.184270598s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-498410 -n no-preload-498410
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.60s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-338960 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [33962967-71e6-4efd-ba08-6746abe61d4f] Pending
helpers_test.go:344: "busybox" [33962967-71e6-4efd-ba08-6746abe61d4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0915 07:53:45.409902 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/enable-default-cni-316191/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [33962967-71e6-4efd-ba08-6746abe61d4f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004051335s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-338960 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-338960 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-338960 describe deploy/metrics-server -n kube-system
E0915 07:53:55.452188 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-338960 --alsologtostderr -v=3
E0915 07:53:55.651370 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/enable-default-cni-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-338960 --alsologtostderr -v=3: (12.155713031s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-338960 -n old-k8s-version-338960
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-338960 -n old-k8s-version-338960: exit status 7 (71.306778ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-338960 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (144.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-338960 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0915 07:54:11.957360 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:54:16.133118 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/enable-default-cni-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:54:23.155376 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:54:48.232616 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
	(last message repeated 8 times with increasing backoff, through 07:54:49.520585)
E0915 07:54:50.057319 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/calico-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:54:50.802863 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:54:53.364689 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:54:57.094875 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/enable-default-cni-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:54:58.486107 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:55:08.727818 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:55:18.153143 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/kindnet-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:55:29.210048 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:55:33.879505 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:55:45.854689 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/kindnet-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:55:54.760745 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/bridge-316191/client.crt: no such file or directory" logger="UnhandledError"
	(last message repeated 8 times with increasing backoff, through 07:55:56.049554)
E0915 07:55:56.664156 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:55:57.330911 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/bridge-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:55:59.892373 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/bridge-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:56:05.013922 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/bridge-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:56:10.172368 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:56:13.594840 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:56:15.256474 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/bridge-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:56:19.016417 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/enable-default-cni-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-338960 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m23.695766952s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-338960 -n old-k8s-version-338960
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (144.06s)
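
SecondStart restarts the stopped profile with the same flags, so both the busybox workload and the dashboard addon enabled while the cluster was down should come back; the next two steps verify exactly that. A quick manual spot check, assuming the same context:

	kubectl --context old-k8s-version-338960 get pods -n kubernetes-dashboard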

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8jwdc" [a0cd2653-77af-491b-992b-f96be53f577a] Running
E0915 07:56:35.738162 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/bridge-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004746595s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8jwdc" [a0cd2653-77af-491b-992b-f96be53f577a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004669046s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-338960 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-338960 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
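
VerifyKubernetesImages lists the images in the runtime and reports anything that is not a stock Kubernetes/minikube image; the busybox and kindnetd entries above are expected leftovers from earlier steps. The same listing in a human-readable form (a sketch; the table formatter is an assumption about the minikube build under test):

	out/minikube-linux-arm64 -p old-k8s-version-338960 image list --format=table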

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-338960 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-338960 -n old-k8s-version-338960
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-338960 -n old-k8s-version-338960: exit status 2 (320.569336ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-338960 -n old-k8s-version-338960
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-338960 -n old-k8s-version-338960: exit status 2 (322.862131ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-338960 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-338960 -n old-k8s-version-338960
E0915 07:56:46.045834 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-338960 -n old-k8s-version-338960
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.08s)
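
Pause freezes the control plane and kubelet without stopping the node container, which is why status reports the APIServer as Paused and the Kubelet as Stopped with exit status 2 until unpause restores both. The cycle in isolation, assuming the same profile:

	out/minikube-linux-arm64 pause -p old-k8s-version-338960
	out/minikube-linux-arm64 status -p old-k8s-version-338960 || true
	out/minikube-linux-arm64 unpause -p old-k8s-version-338960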

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (78.51s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-540064 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0915 07:57:06.193644 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/calico-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:57:16.699504 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/bridge-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-540064 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m18.510861796s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sh4z7" [4cd9279d-6a69-492e-9207-5342041f7c8f] Running
E0915 07:57:32.093795 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:57:33.899366 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/calico-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004035368s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sh4z7" [4cd9279d-6a69-492e-9207-5342041f7c8f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005273221s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-498410 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-498410 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.14s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-498410 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-498410 -n no-preload-498410
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-498410 -n no-preload-498410: exit status 2 (329.216914ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-498410 -n no-preload-498410
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-498410 -n no-preload-498410: exit status 2 (317.760941ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-498410 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-498410 -n no-preload-498410
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-498410 -n no-preload-498410
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.44s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-641711 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0915 07:57:50.016121 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-641711 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (37.43751908s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-540064 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ab050e76-a60e-4e03-9d6e-90065c24a9cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ab050e76-a60e-4e03-9d6e-90065c24a9cc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003400419s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-540064 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-540064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0915 07:58:17.721711 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-540064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.304723707s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-540064 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-540064 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-540064 --alsologtostderr -v=3: (12.179763144s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-641711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-641711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.039508955s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-641711 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-641711 --alsologtostderr -v=3: (1.234842165s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-641711 -n newest-cni-641711
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-641711 -n newest-cni-641711: exit status 7 (72.545808ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-641711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.31s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-641711 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-641711 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (24.886504103s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-641711 -n newest-cni-641711
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.31s)
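
Note the narrow --wait=apiserver,system_pods,default_sa selection. It lines up with the earlier warning that cni mode needs extra setup before pods can schedule: waiting on everything would hang. A trimmed sketch of the start (the --feature-gates and logging flags from the log are omitted; the interpretation of --wait is an inference):

# --wait limits readiness checks to components that can come up without a CNI:
out/minikube-linux-arm64 start -p newest-cni-641711 --memory=2200 \
  --wait=apiserver,system_pods,default_sa \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1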

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-540064 -n embed-certs-540064
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-540064 -n embed-certs-540064: exit status 7 (67.932848ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-540064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (335.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-540064 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0915 07:58:35.155288 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/enable-default-cni-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:38.621737 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/bridge-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:43.219393 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:43.225705 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:43.237017 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:43.258344 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:43.299700 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:43.381079 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:43.542585 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:43.864540 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:44.506360 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:45.794009 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:48.355316 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:58:53.477125 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-540064 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (5m35.037383343s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-540064 -n embed-certs-540064
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (335.41s)
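
For context on the profile name: --embed-certs inlines the client certificate data into the kubeconfig instead of referencing files on disk. A quick way to confirm that for this profile, using standard kubectl fields rather than a command taken from this report:

# A non-empty result means the cert is embedded as client-certificate-data
# rather than pointed at by a client-certificate file path:
kubectl config view --minify --context embed-certs-540064 \
  -o jsonpath='{.users[0].user.client-certificate-data}' | head -c 16; echo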

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-641711 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-641711 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-641711 -n newest-cni-641711
E0915 07:58:55.451498 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-641711 -n newest-cni-641711: exit status 2 (344.064277ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-641711 -n newest-cni-641711
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-641711 -n newest-cni-641711: exit status 2 (329.966012ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-641711 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-641711 -n newest-cni-641711
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-641711 -n newest-cni-641711
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.74s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-323115 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0915 07:59:02.857940 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/enable-default-cni-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:59:03.718438 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:59:24.200385 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:59:48.232243 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:00:05.161888 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:00:15.935376 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:00:18.153132 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/kindnet-316191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-323115 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m20.427526579s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.43s)
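
The distinguishing flag here is --apiserver-port=8444 (8443 is the default). One way to confirm the listener, sketched under the assumptions that ss is available in the node image and that, with the docker driver, the kubeconfig's host-facing port is a separate published mapping:

# The apiserver listens on 8444 inside the node:
out/minikube-linux-arm64 -p default-k8s-diff-port-323115 ssh -- \
  sudo ss -ltn 'sport = :8444'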

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-323115 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bd943633-c193-499f-96e9-acc4ef18572d] Pending
helpers_test.go:344: "busybox" [bd943633-c193-499f-96e9-acc4ef18572d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bd943633-c193-499f-96e9-acc4ef18572d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003574669s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-323115 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)
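
testdata/busybox.yaml is not reproduced in this report. A minimal sketch of the kind of pod the waiter above would match, with the integration-test=busybox label and context name taken from the log and everything else assumed:

cat <<'EOF' | kubectl --context default-k8s-diff-port-323115 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox
    command: ["sleep", "3600"]
EOF
# The follow-up check simply execs into the pod:
kubectl --context default-k8s-diff-port-323115 exec busybox -- /bin/sh -c "ulimit -n"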

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-323115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-323115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.07278198s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-323115 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-323115 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-323115 --alsologtostderr -v=3: (11.958052986s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-323115 -n default-k8s-diff-port-323115
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-323115 -n default-k8s-diff-port-323115: exit status 7 (75.303827ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-323115 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (295.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-323115 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0915 08:00:54.761153 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/bridge-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:01:13.594586 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/functional-143496/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:01:22.463955 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/bridge-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:01:27.083458 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:01:46.046218 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/addons-078133/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:06.193386 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/calico-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:40.117396 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:40.123969 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:40.135513 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:40.157023 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:40.198546 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:40.279981 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:40.441881 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:40.763549 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:41.404940 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:42.686552 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:45.248705 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:50.016919 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/custom-flannel-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:02:50.370880 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:03:00.612443 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:03:21.094562 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:03:35.155714 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/enable-default-cni-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:03:43.219428 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:03:55.451549 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/auto-316191/client.crt: no such file or directory" logger="UnhandledError"
E0915 08:04:02.055918 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/no-preload-498410/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-323115 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m54.752183026s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-323115 -n default-k8s-diff-port-323115
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (295.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vvczl" [6b66935e-0ab6-4fca-ba17-5bed2401ea23] Running
E0915 08:04:10.925389 2523116 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/old-k8s-version-338960/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004402347s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
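
This is where the dashboard addon enabled back in EnableAddonAfterStop pays off: the pod only appears after the second start. The equivalent manual check, with the label and namespace taken from the log:

kubectl --context embed-certs-540064 -n kubernetes-dashboard \
  get pods -l k8s-app=kubernetes-dashboard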

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vvczl" [6b66935e-0ab6-4fca-ba17-5bed2401ea23] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004491766s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-540064 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-540064 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
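
The two "non-minikube" images are expected rather than a problem: kindnetd backs the CNI and the busybox image is left over from the earlier deploy. That is an inference; the test merely logs them and passes. To eyeball the full list (jq field names assumed from minikube's JSON output):

out/minikube-linux-arm64 -p embed-certs-540064 image list --format=json \
  | jq -r '.[].repoTags[]?' | sort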

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-540064 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-540064 -n embed-certs-540064
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-540064 -n embed-certs-540064: exit status 2 (342.53373ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-540064 -n embed-certs-540064
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-540064 -n embed-certs-540064: exit status 2 (329.804042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-540064 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-540064 -n embed-certs-540064
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-540064 -n embed-certs-540064
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pvlgg" [1588c6c0-55eb-4f40-906d-62735fb57089] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004188985s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pvlgg" [1588c6c0-55eb-4f40-906d-62735fb57089] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003498432s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-323115 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-323115 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-323115 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-323115 -n default-k8s-diff-port-323115
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-323115 -n default-k8s-diff-port-323115: exit status 2 (322.395211ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-323115 -n default-k8s-diff-port-323115
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-323115 -n default-k8s-diff-port-323115: exit status 2 (317.881473ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-323115 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-323115 -n default-k8s-diff-port-323115
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-323115 -n default-k8s-diff-port-323115
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.00s)
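
The pause/unpause pattern above recurs for each profile, so it is worth spelling out once: after pause, status exits 2 with the apiserver reported Paused and the kubelet Stopped; after unpause both checks return cleanly. A condensed sketch, with exit-code notes inferred from the non-zero-exit lines in this log:

p=default-k8s-diff-port-323115
out/minikube-linux-arm64 pause -p "$p" --alsologtostderr -v=1
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p "$p" -n "$p"  # "Paused", exit 2
out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p "$p" -n "$p"    # "Stopped", exit 2
out/minikube-linux-arm64 unpause -p "$p" --alsologtostderr -v=1
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p "$p" -n "$p"  # exit 0 again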

                                                
                                    

Test skip (30/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.59s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-842211 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-842211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-842211
--- SKIP: TestDownloadOnlyKic (0.59s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-316191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-316191

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-316191" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-316191" does not exist

>>> k8s: netcat logs:
error: context "kubenet-316191" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-316191" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-316191" does not exist

>>> k8s: coredns logs:
error: context "kubenet-316191" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-316191" does not exist

>>> k8s: api server logs:
error: context "kubenet-316191" does not exist

>>> host: /etc/cni:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: ip a s:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: ip r s:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: iptables-save:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: iptables table nat:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-316191" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-316191" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-316191" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: kubelet daemon config:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> k8s: kubelet logs:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:34:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-546742
contexts:
- context:
    cluster: pause-546742
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:34:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-546742
  name: pause-546742
current-context: pause-546742
kind: Config
preferences: {}
users:
- name: pause-546742
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/pause-546742/client.crt
    client-key: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/pause-546742/client.key
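
Note: the kubeconfig above defines only the pause-546742 context, which is why every kubectl probe against "kubenet-316191" fails before reaching a cluster; the remaining errors below are the same symptom. A minimal diagnostic sketch (the kubectl and minikube commands are real; the session itself is hypothetical):

	kubectl config get-contexts               # would list only pause-546742 here
	kubectl config use-context pause-546742   # switch to the one context that exists
	minikube profile list                     # would confirm no kubenet-316191 profile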

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-316191

>>> host: docker daemon status:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: docker daemon config:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: docker system info:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: cri-docker daemon status:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: cri-docker daemon config:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: cri-dockerd version:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: containerd daemon status:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: containerd daemon config:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: containerd config dump:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: crio daemon status:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: crio daemon config:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: /etc/crio:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

>>> host: crio config:
* Profile "kubenet-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-316191"

----------------------- debugLogs end: kubenet-316191 [took: 3.456647097s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-316191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-316191
--- SKIP: TestNetworkPlugins/group/kubenet (3.62s)
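
Each debugLogs probe above corresponds to an ordinary CLI call against the profile under test; a hedged reconstruction follows (the collector itself lives in the test helpers and is not shown in this log; the deployment name is taken from the probe label):

	kubectl --context kubenet-316191 describe deployment netcat            # the ">>> k8s: ..." probes
	out/minikube-linux-arm64 -p kubenet-316191 ssh -- sudo iptables-save   # the ">>> host: ..." probes

Because the kubenet-316191 profile was never created, the first class of probe fails with "context does not exist" and the second with "Profile ... not found", exactly as recorded above.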

TestNetworkPlugins/group/cilium (4.11s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-316191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-316191

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-316191

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-316191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-316191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-316191

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-316191

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-316191

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-316191

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-316191

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-316191

>>> host: /etc/nsswitch.conf:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: /etc/hosts:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: /etc/resolv.conf:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-316191

>>> host: crictl pods:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: crictl containers:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> k8s: describe netcat deployment:
error: context "cilium-316191" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-316191" does not exist

>>> k8s: netcat logs:
error: context "cilium-316191" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-316191" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-316191" does not exist

>>> k8s: coredns logs:
error: context "cilium-316191" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-316191" does not exist

>>> k8s: api server logs:
error: context "cilium-316191" does not exist

>>> host: /etc/cni:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: ip a s:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: ip r s:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: iptables-save:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: iptables table nat:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-316191

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-316191

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-316191" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-316191" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-316191

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-316191

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-316191" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-316191" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-316191" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-316191" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-316191" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: kubelet daemon config:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> k8s: kubelet logs:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-2517725/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:34:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-546742
contexts:
- context:
    cluster: pause-546742
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:34:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-546742
  name: pause-546742
current-context: pause-546742
kind: Config
preferences: {}
users:
- name: pause-546742
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/pause-546742/client.crt
    client-key: /home/jenkins/minikube-integration/19644-2517725/.minikube/profiles/pause-546742/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-316191

>>> host: docker daemon status:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: docker daemon config:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: docker system info:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: cri-docker daemon status:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: cri-docker daemon config:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: cri-dockerd version:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: containerd daemon status:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: containerd daemon config:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: containerd config dump:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: crio daemon status:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: crio daemon config:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: /etc/crio:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

>>> host: crio config:
* Profile "cilium-316191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-316191"

----------------------- debugLogs end: cilium-316191 [took: 3.950642999s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-316191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-316191
--- SKIP: TestNetworkPlugins/group/cilium (4.11s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-925326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-925326
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
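
The skip here is purely driver-gated: the group only runs under virtualbox, and the cleanup still deletes the placeholder profile. A hedged way to check which driver each profile on a runner actually uses (a real minikube command; the output shape is abbreviated):

	out/minikube-linux-arm64 profile list   # the Driver column shows the driver per profile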